Test Report: Docker_Linux_crio_arm64 21753

Commit 37d7943b58d61ad05591f3a5d0091cda14132c69 · 2025-10-17 · 41947

Failed tests (44/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.28
35 TestAddons/parallel/Registry 20.88
36 TestAddons/parallel/RegistryCreds 0.49
37 TestAddons/parallel/Ingress 145.14
38 TestAddons/parallel/InspektorGadget 6.26
39 TestAddons/parallel/MetricsServer 6.38
41 TestAddons/parallel/CSI 46.19
42 TestAddons/parallel/Headlamp 3.22
43 TestAddons/parallel/CloudSpanner 5.28
44 TestAddons/parallel/LocalPath 8.35
45 TestAddons/parallel/NvidiaDevicePlugin 6.3
46 TestAddons/parallel/Yakd 6.3
98 TestFunctional/parallel/ServiceCmdConnect 603.64
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.23
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.16
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.64
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.26
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.46
147 TestFunctional/parallel/ServiceCmd/DeployApp 600.83
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
154 TestFunctional/parallel/ServiceCmd/Format 0.46
155 TestFunctional/parallel/ServiceCmd/URL 0.48
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 514.88
175 TestMultiControlPlane/serial/DeleteSecondaryNode 5.51
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 4.94
177 TestMultiControlPlane/serial/StopCluster 14.15
178 TestMultiControlPlane/serial/RestartCluster 112
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 4.18
180 TestMultiControlPlane/serial/AddSecondaryNode 90.81
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 4.44
191 TestJSONOutput/pause/Command 2.3
197 TestJSONOutput/unpause/Command 2.04
281 TestPause/serial/Pause 6.45
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.39
303 TestStartStop/group/old-k8s-version/serial/Pause 6.41
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.57
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.1
321 TestStartStop/group/no-preload/serial/Pause 8.97
327 TestStartStop/group/embed-certs/serial/Pause 7.38
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.62
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.08
341 TestStartStop/group/newest-cni/serial/Pause 6.87
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.58
TestAddons/serial/Volcano (0.28s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-379549 addons disable volcano --alsologtostderr -v=1: exit status 11 (281.712438ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 18:59:35.441460  266353 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:59:35.442078  266353 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:35.442118  266353 out.go:374] Setting ErrFile to fd 2...
	I1017 18:59:35.442137  266353 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:35.442497  266353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 18:59:35.442858  266353 mustload.go:65] Loading cluster: addons-379549
	I1017 18:59:35.443313  266353 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:35.443354  266353 addons.go:606] checking whether the cluster is paused
	I1017 18:59:35.443501  266353 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:35.443545  266353 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:59:35.444055  266353 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:59:35.462324  266353 ssh_runner.go:195] Run: systemctl --version
	I1017 18:59:35.462386  266353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:59:35.482525  266353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:59:35.588003  266353 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:59:35.588132  266353 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:59:35.618131  266353 cri.go:89] found id: "5cf24bffa8a4abae885a44b533000299393dbf536f944868196b772da2ea935d"
	I1017 18:59:35.618157  266353 cri.go:89] found id: "80799fb75c9169389498ebfca9e8bd150dc22745bd39afd919de30736f993d78"
	I1017 18:59:35.618163  266353 cri.go:89] found id: "6fde7d0006c1aaf6e1954ddbde6bdf9af5d8e3650951bef9ba330e731274d207"
	I1017 18:59:35.618168  266353 cri.go:89] found id: "92b113c7cfe7940976d0561d7ffff8e1ec02e01f0dcc54cd8e589eabf32cc1b0"
	I1017 18:59:35.618171  266353 cri.go:89] found id: "5651bbb1546eae506067477cc633603ca2ac02a842f17e09ce6fe9a79ffa0e0e"
	I1017 18:59:35.618175  266353 cri.go:89] found id: "b06455475d2b37b302d9223e6cc497a0c417c77589f2ced0938ddbd1b2411306"
	I1017 18:59:35.618178  266353 cri.go:89] found id: "ce48b4c920d81fc27eaef5e1119f5ded186bb80b0f7da0544430a2c3fb4fc29a"
	I1017 18:59:35.618181  266353 cri.go:89] found id: "accf4579f8250f27038827ec1b315b311a306293af9ef176a69914469bb2353b"
	I1017 18:59:35.618184  266353 cri.go:89] found id: "fb1f7d0e065d8023e9546ae0a6a64fa04a57b0b47d3b44f594141de71b080618"
	I1017 18:59:35.618191  266353 cri.go:89] found id: "3986728e63c14c7fd277443687da324c568b58d749e701a217495bfa71741734"
	I1017 18:59:35.618194  266353 cri.go:89] found id: "88eee337e7ec6eae66159898b434ac7073a3200b04b237aec88ca3e25bdb2222"
	I1017 18:59:35.618198  266353 cri.go:89] found id: "012db353f99b6e2ef9ff8f6f38fdcfeb8ab14b588f53e8952b29395971f22d83"
	I1017 18:59:35.618201  266353 cri.go:89] found id: "9361ebb005625fb2ad3d70ee0ecdfc71f800630500b97f40a602782e074bb2c4"
	I1017 18:59:35.618204  266353 cri.go:89] found id: "de5165e5bfa9f6277e7973043a69fcf80ecd76150ce5c7fc069314ed88054ea7"
	I1017 18:59:35.618208  266353 cri.go:89] found id: "37d41037f4ee9382157bc059bf46e949eab3051aeb71edbb106837671cf3e24a"
	I1017 18:59:35.618213  266353 cri.go:89] found id: "c83ac4cff13e7be5a7a592b7ef3ad2c0dc7e4d780b6863448ea34fc512f98e11"
	I1017 18:59:35.618220  266353 cri.go:89] found id: "70437ef1453701665ef3d63f7f7a1d3affd361ef34251a1b4b2f6c5615248d1b"
	I1017 18:59:35.618225  266353 cri.go:89] found id: "0c926298efaa60b8e6e7e23cbd555e5271a4b331186cbf064b8a06a84c92da02"
	I1017 18:59:35.618229  266353 cri.go:89] found id: "ad27f04cf6a14e6b40d51c3fe333d53a8ebaf1685edb0d71d7e089c7f96b8001"
	I1017 18:59:35.618232  266353 cri.go:89] found id: "22a266e5672abf5ca502cdbd17cb99d63f6b55ce0cb5a206303cec2167f7d569"
	I1017 18:59:35.618237  266353 cri.go:89] found id: "beb0486de70d8e5dc49e7b06450eb1df72f27a30d1a116fcef4687a1229bab02"
	I1017 18:59:35.618247  266353 cri.go:89] found id: "04fd09957b07ce3e283a4d21b3fd7e87d3b47d90a25d55656735805959496cf2"
	I1017 18:59:35.618251  266353 cri.go:89] found id: "612fc65e5e8667898a174c79ca2be5a8ae8041623681c350e5ee77608e36c583"
	I1017 18:59:35.618254  266353 cri.go:89] found id: ""
	I1017 18:59:35.618307  266353 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 18:59:35.631870  266353 out.go:203] 
	W1017 18:59:35.632959  266353 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 18:59:35.632984  266353 out.go:285] * 
	* 
	W1017 18:59:35.639418  266353 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 18:59:35.641155  266353 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-379549 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.28s)
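Note: every "addons disable" failure in this run exits with MK_ADDON_DISABLE_PAUSED at the same step: minikube's paused-state check lists kube-system containers with crictl (which succeeds above) and then runs "sudo runc list -f json", which fails on this crio node with "open /run/runc: no such file or directory". A minimal reproduction sketch, assuming the addons-379549 profile from this run is still up; the two commands mirror the ssh_runner calls in the log above, and the `minikube ssh "<cmd>"` form is the one used by the Ingress test below:

    # succeeds in the log above: crictl can list the kube-system containers
    out/minikube-linux-arm64 -p addons-379549 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    # fails in the log above: no runc state directory exists at /run/runc on this node
    out/minikube-linux-arm64 -p addons-379549 ssh "sudo runc list -f json"

One possible explanation (an assumption, not confirmed by the log) is that the node's containers were created by crun rather than runc, so runc has no state to list under /run/runc.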

TestAddons/parallel/Registry (20.88s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 6.153291ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-lggv9" [27b5c261-0db7-4e88-84bf-fe4b05cf5968] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003305202s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-q985d" [2a95f94d-0609-4773-8345-e3789378c865] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004654649s
addons_test.go:392: (dbg) Run:  kubectl --context addons-379549 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-379549 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-379549 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.329037778s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 ip
2025/10/17 19:00:06 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-379549 addons disable registry --alsologtostderr -v=1: exit status 11 (268.795199ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 19:00:06.723234  267447 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:00:06.724263  267447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:06.724325  267447 out.go:374] Setting ErrFile to fd 2...
	I1017 19:00:06.724349  267447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:06.724705  267447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:00:06.725062  267447 mustload.go:65] Loading cluster: addons-379549
	I1017 19:00:06.725514  267447 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:06.725558  267447 addons.go:606] checking whether the cluster is paused
	I1017 19:00:06.725709  267447 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:06.725752  267447 host.go:66] Checking if "addons-379549" exists ...
	I1017 19:00:06.726289  267447 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 19:00:06.744947  267447 ssh_runner.go:195] Run: systemctl --version
	I1017 19:00:06.745008  267447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 19:00:06.763843  267447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 19:00:06.867241  267447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:00:06.867333  267447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:00:06.902053  267447 cri.go:89] found id: "5cf24bffa8a4abae885a44b533000299393dbf536f944868196b772da2ea935d"
	I1017 19:00:06.902077  267447 cri.go:89] found id: "80799fb75c9169389498ebfca9e8bd150dc22745bd39afd919de30736f993d78"
	I1017 19:00:06.902083  267447 cri.go:89] found id: "6fde7d0006c1aaf6e1954ddbde6bdf9af5d8e3650951bef9ba330e731274d207"
	I1017 19:00:06.902087  267447 cri.go:89] found id: "92b113c7cfe7940976d0561d7ffff8e1ec02e01f0dcc54cd8e589eabf32cc1b0"
	I1017 19:00:06.902091  267447 cri.go:89] found id: "5651bbb1546eae506067477cc633603ca2ac02a842f17e09ce6fe9a79ffa0e0e"
	I1017 19:00:06.902095  267447 cri.go:89] found id: "b06455475d2b37b302d9223e6cc497a0c417c77589f2ced0938ddbd1b2411306"
	I1017 19:00:06.902135  267447 cri.go:89] found id: "ce48b4c920d81fc27eaef5e1119f5ded186bb80b0f7da0544430a2c3fb4fc29a"
	I1017 19:00:06.902147  267447 cri.go:89] found id: "accf4579f8250f27038827ec1b315b311a306293af9ef176a69914469bb2353b"
	I1017 19:00:06.902151  267447 cri.go:89] found id: "fb1f7d0e065d8023e9546ae0a6a64fa04a57b0b47d3b44f594141de71b080618"
	I1017 19:00:06.902158  267447 cri.go:89] found id: "3986728e63c14c7fd277443687da324c568b58d749e701a217495bfa71741734"
	I1017 19:00:06.902168  267447 cri.go:89] found id: "88eee337e7ec6eae66159898b434ac7073a3200b04b237aec88ca3e25bdb2222"
	I1017 19:00:06.902172  267447 cri.go:89] found id: "012db353f99b6e2ef9ff8f6f38fdcfeb8ab14b588f53e8952b29395971f22d83"
	I1017 19:00:06.902175  267447 cri.go:89] found id: "9361ebb005625fb2ad3d70ee0ecdfc71f800630500b97f40a602782e074bb2c4"
	I1017 19:00:06.902179  267447 cri.go:89] found id: "de5165e5bfa9f6277e7973043a69fcf80ecd76150ce5c7fc069314ed88054ea7"
	I1017 19:00:06.902182  267447 cri.go:89] found id: "37d41037f4ee9382157bc059bf46e949eab3051aeb71edbb106837671cf3e24a"
	I1017 19:00:06.902188  267447 cri.go:89] found id: "c83ac4cff13e7be5a7a592b7ef3ad2c0dc7e4d780b6863448ea34fc512f98e11"
	I1017 19:00:06.902219  267447 cri.go:89] found id: "70437ef1453701665ef3d63f7f7a1d3affd361ef34251a1b4b2f6c5615248d1b"
	I1017 19:00:06.902225  267447 cri.go:89] found id: "0c926298efaa60b8e6e7e23cbd555e5271a4b331186cbf064b8a06a84c92da02"
	I1017 19:00:06.902228  267447 cri.go:89] found id: "ad27f04cf6a14e6b40d51c3fe333d53a8ebaf1685edb0d71d7e089c7f96b8001"
	I1017 19:00:06.902232  267447 cri.go:89] found id: "22a266e5672abf5ca502cdbd17cb99d63f6b55ce0cb5a206303cec2167f7d569"
	I1017 19:00:06.902238  267447 cri.go:89] found id: "beb0486de70d8e5dc49e7b06450eb1df72f27a30d1a116fcef4687a1229bab02"
	I1017 19:00:06.902247  267447 cri.go:89] found id: "04fd09957b07ce3e283a4d21b3fd7e87d3b47d90a25d55656735805959496cf2"
	I1017 19:00:06.902250  267447 cri.go:89] found id: "612fc65e5e8667898a174c79ca2be5a8ae8041623681c350e5ee77608e36c583"
	I1017 19:00:06.902254  267447 cri.go:89] found id: ""
	I1017 19:00:06.902324  267447 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:00:06.917675  267447 out.go:203] 
	W1017 19:00:06.920581  267447 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:00:06.920665  267447 out.go:285] * 
	* 
	W1017 19:00:06.926678  267447 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:00:06.929623  267447 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-379549 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (20.88s)

TestAddons/parallel/RegistryCreds (0.49s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.042555ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-379549
addons_test.go:332: (dbg) Run:  kubectl --context addons-379549 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-379549 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (253.848237ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 19:00:35.754132  268399 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:00:35.754747  268399 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:35.755042  268399 out.go:374] Setting ErrFile to fd 2...
	I1017 19:00:35.755066  268399 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:35.755450  268399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:00:35.755804  268399 mustload.go:65] Loading cluster: addons-379549
	I1017 19:00:35.756920  268399 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:35.756971  268399 addons.go:606] checking whether the cluster is paused
	I1017 19:00:35.757132  268399 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:35.757173  268399 host.go:66] Checking if "addons-379549" exists ...
	I1017 19:00:35.757651  268399 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 19:00:35.776222  268399 ssh_runner.go:195] Run: systemctl --version
	I1017 19:00:35.776285  268399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 19:00:35.792892  268399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 19:00:35.902937  268399 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:00:35.903032  268399 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:00:35.931550  268399 cri.go:89] found id: "5cf24bffa8a4abae885a44b533000299393dbf536f944868196b772da2ea935d"
	I1017 19:00:35.931580  268399 cri.go:89] found id: "80799fb75c9169389498ebfca9e8bd150dc22745bd39afd919de30736f993d78"
	I1017 19:00:35.931594  268399 cri.go:89] found id: "6fde7d0006c1aaf6e1954ddbde6bdf9af5d8e3650951bef9ba330e731274d207"
	I1017 19:00:35.931599  268399 cri.go:89] found id: "92b113c7cfe7940976d0561d7ffff8e1ec02e01f0dcc54cd8e589eabf32cc1b0"
	I1017 19:00:35.931603  268399 cri.go:89] found id: "5651bbb1546eae506067477cc633603ca2ac02a842f17e09ce6fe9a79ffa0e0e"
	I1017 19:00:35.931608  268399 cri.go:89] found id: "b06455475d2b37b302d9223e6cc497a0c417c77589f2ced0938ddbd1b2411306"
	I1017 19:00:35.931611  268399 cri.go:89] found id: "ce48b4c920d81fc27eaef5e1119f5ded186bb80b0f7da0544430a2c3fb4fc29a"
	I1017 19:00:35.931615  268399 cri.go:89] found id: "accf4579f8250f27038827ec1b315b311a306293af9ef176a69914469bb2353b"
	I1017 19:00:35.931618  268399 cri.go:89] found id: "fb1f7d0e065d8023e9546ae0a6a64fa04a57b0b47d3b44f594141de71b080618"
	I1017 19:00:35.931624  268399 cri.go:89] found id: "3986728e63c14c7fd277443687da324c568b58d749e701a217495bfa71741734"
	I1017 19:00:35.931630  268399 cri.go:89] found id: "88eee337e7ec6eae66159898b434ac7073a3200b04b237aec88ca3e25bdb2222"
	I1017 19:00:35.931633  268399 cri.go:89] found id: "012db353f99b6e2ef9ff8f6f38fdcfeb8ab14b588f53e8952b29395971f22d83"
	I1017 19:00:35.931636  268399 cri.go:89] found id: "9361ebb005625fb2ad3d70ee0ecdfc71f800630500b97f40a602782e074bb2c4"
	I1017 19:00:35.931640  268399 cri.go:89] found id: "de5165e5bfa9f6277e7973043a69fcf80ecd76150ce5c7fc069314ed88054ea7"
	I1017 19:00:35.931644  268399 cri.go:89] found id: "37d41037f4ee9382157bc059bf46e949eab3051aeb71edbb106837671cf3e24a"
	I1017 19:00:35.931651  268399 cri.go:89] found id: "c83ac4cff13e7be5a7a592b7ef3ad2c0dc7e4d780b6863448ea34fc512f98e11"
	I1017 19:00:35.931655  268399 cri.go:89] found id: "70437ef1453701665ef3d63f7f7a1d3affd361ef34251a1b4b2f6c5615248d1b"
	I1017 19:00:35.931659  268399 cri.go:89] found id: "0c926298efaa60b8e6e7e23cbd555e5271a4b331186cbf064b8a06a84c92da02"
	I1017 19:00:35.931662  268399 cri.go:89] found id: "ad27f04cf6a14e6b40d51c3fe333d53a8ebaf1685edb0d71d7e089c7f96b8001"
	I1017 19:00:35.931665  268399 cri.go:89] found id: "22a266e5672abf5ca502cdbd17cb99d63f6b55ce0cb5a206303cec2167f7d569"
	I1017 19:00:35.931671  268399 cri.go:89] found id: "beb0486de70d8e5dc49e7b06450eb1df72f27a30d1a116fcef4687a1229bab02"
	I1017 19:00:35.931676  268399 cri.go:89] found id: "04fd09957b07ce3e283a4d21b3fd7e87d3b47d90a25d55656735805959496cf2"
	I1017 19:00:35.931680  268399 cri.go:89] found id: "612fc65e5e8667898a174c79ca2be5a8ae8041623681c350e5ee77608e36c583"
	I1017 19:00:35.931683  268399 cri.go:89] found id: ""
	I1017 19:00:35.931738  268399 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:00:35.944872  268399 out.go:203] 
	W1017 19:00:35.945962  268399 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:00:35.945990  268399 out.go:285] * 
	* 
	W1017 19:00:35.952081  268399 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:00:35.953574  268399 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-379549 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.49s)

TestAddons/parallel/Ingress (145.14s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-379549 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-379549 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-379549 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [2016ec58-5586-4534-9959-3c9681eb5f08] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [2016ec58-5586-4534-9959-3c9681eb5f08] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003803167s
I1017 19:00:29.256603  259596 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-379549 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.225985644s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-379549 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-379549
helpers_test.go:243: (dbg) docker inspect addons-379549:

-- stdout --
	[
	    {
	        "Id": "55fec2c4916f9dad039fe64a881991db0345ca7e5cbc7415c8368965be03ba66",
	        "Created": "2025-10-17T18:57:12.179689816Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 260760,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T18:57:12.241795967Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/55fec2c4916f9dad039fe64a881991db0345ca7e5cbc7415c8368965be03ba66/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/55fec2c4916f9dad039fe64a881991db0345ca7e5cbc7415c8368965be03ba66/hostname",
	        "HostsPath": "/var/lib/docker/containers/55fec2c4916f9dad039fe64a881991db0345ca7e5cbc7415c8368965be03ba66/hosts",
	        "LogPath": "/var/lib/docker/containers/55fec2c4916f9dad039fe64a881991db0345ca7e5cbc7415c8368965be03ba66/55fec2c4916f9dad039fe64a881991db0345ca7e5cbc7415c8368965be03ba66-json.log",
	        "Name": "/addons-379549",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-379549:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-379549",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "55fec2c4916f9dad039fe64a881991db0345ca7e5cbc7415c8368965be03ba66",
	                "LowerDir": "/var/lib/docker/overlay2/3e4eb3a0f914e87e9420aea224c0e4dea59ac71baf8770cf39cdb3283a5258ee-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e4eb3a0f914e87e9420aea224c0e4dea59ac71baf8770cf39cdb3283a5258ee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e4eb3a0f914e87e9420aea224c0e4dea59ac71baf8770cf39cdb3283a5258ee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e4eb3a0f914e87e9420aea224c0e4dea59ac71baf8770cf39cdb3283a5258ee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-379549",
	                "Source": "/var/lib/docker/volumes/addons-379549/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-379549",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-379549",
	                "name.minikube.sigs.k8s.io": "addons-379549",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2f0e0e97287944811fc96deec392fc47351a9a255038b63627692f47b83a8471",
	            "SandboxKey": "/var/run/docker/netns/2f0e0e972879",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-379549": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:fd:d3:66:0f:64",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3b67371b301eeb2c9b0127b37d48aff81f3b763f5b36ea0e3cc33c895a80c6ed",
	                    "EndpointID": "959438c556ae4a71d046ca098ea53ba78c0c756e8bb3adc2770022e46ed75775",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-379549",
	                        "55fec2c4916f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-379549 -n addons-379549
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-379549 logs -n 25: (1.621770719s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-786214                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-786214 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ start   │ --download-only -p binary-mirror-789835 --alsologtostderr --binary-mirror http://127.0.0.1:35757 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-789835   │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ delete  │ -p binary-mirror-789835                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-789835   │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ addons  │ enable dashboard -p addons-379549                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-379549                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ start   │ -p addons-379549 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:59 UTC │
	│ addons  │ addons-379549 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-379549 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ addons  │ enable headlamp -p addons-379549 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-379549 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ ip      │ addons-379549 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 19:00 UTC │ 17 Oct 25 19:00 UTC │
	│ addons  │ addons-379549 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 19:00 UTC │                     │
	│ addons  │ addons-379549 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 19:00 UTC │                     │
	│ addons  │ addons-379549 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 19:00 UTC │                     │
	│ ssh     │ addons-379549 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 19:00 UTC │                     │
	│ addons  │ addons-379549 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 19:00 UTC │                     │
	│ addons  │ addons-379549 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 19:00 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-379549                                                                                                                                                                                                                                                                                                                                                                                           │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 19:00 UTC │ 17 Oct 25 19:00 UTC │
	│ addons  │ addons-379549 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 19:00 UTC │                     │
	│ addons  │ addons-379549 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 19:00 UTC │                     │
	│ addons  │ addons-379549 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 19:00 UTC │                     │
	│ ssh     │ addons-379549 ssh cat /opt/local-path-provisioner/pvc-5684922c-aed9-497d-9bbf-0e02c327a0d2_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 19:00 UTC │ 17 Oct 25 19:00 UTC │
	│ addons  │ addons-379549 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 19:00 UTC │                     │
	│ addons  │ addons-379549 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 19:01 UTC │                     │
	│ ip      │ addons-379549 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 19:02 UTC │ 17 Oct 25 19:02 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 18:56:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 18:56:45.923022  260360 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:56:45.923196  260360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:45.923226  260360 out.go:374] Setting ErrFile to fd 2...
	I1017 18:56:45.923246  260360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:45.923522  260360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 18:56:45.924012  260360 out.go:368] Setting JSON to false
	I1017 18:56:45.924858  260360 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5957,"bootTime":1760721449,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 18:56:45.924952  260360 start.go:141] virtualization:  
	I1017 18:56:45.928245  260360 out.go:179] * [addons-379549] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 18:56:45.931950  260360 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 18:56:45.932016  260360 notify.go:220] Checking for updates...
	I1017 18:56:45.937881  260360 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 18:56:45.940878  260360 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 18:56:45.943708  260360 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 18:56:45.946543  260360 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 18:56:45.949551  260360 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 18:56:45.952713  260360 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 18:56:45.978539  260360 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 18:56:45.978728  260360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 18:56:46.045112  260360 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-17 18:56:46.035136305 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
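For reference, the full "docker system info --format "{{json .}}"" dump above is what minikube parses when validating the driver; the handful of fields that matter for this run can be pulled out directly with the standard Docker CLI format templates (a minimal sketch, values shown as they appear in the dump above):

    $ docker system info --format 'driver={{.Driver}} cgroup={{.CgroupDriver}} cpus={{.NCPU}} mem={{.MemTotal}}'
    driver=overlay2 cgroup=cgroupfs cpus=2 mem=8214831104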
	I1017 18:56:46.045223  260360 docker.go:318] overlay module found
	I1017 18:56:46.048318  260360 out.go:179] * Using the docker driver based on user configuration
	I1017 18:56:46.051151  260360 start.go:305] selected driver: docker
	I1017 18:56:46.051174  260360 start.go:925] validating driver "docker" against <nil>
	I1017 18:56:46.051188  260360 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 18:56:46.051879  260360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 18:56:46.106558  260360 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-17 18:56:46.097757384 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 18:56:46.106725  260360 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 18:56:46.106947  260360 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 18:56:46.109907  260360 out.go:179] * Using Docker driver with root privileges
	I1017 18:56:46.112647  260360 cni.go:84] Creating CNI manager for ""
	I1017 18:56:46.112715  260360 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 18:56:46.112728  260360 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 18:56:46.112798  260360 start.go:349] cluster config:
	{Name:addons-379549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-379549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1017 18:56:46.117672  260360 out.go:179] * Starting "addons-379549" primary control-plane node in "addons-379549" cluster
	I1017 18:56:46.120572  260360 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 18:56:46.123442  260360 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 18:56:46.126318  260360 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 18:56:46.126438  260360 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 18:56:46.126330  260360 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 18:56:46.126451  260360 cache.go:58] Caching tarball of preloaded images
	I1017 18:56:46.126530  260360 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 18:56:46.126540  260360 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 18:56:46.126884  260360 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/config.json ...
	I1017 18:56:46.126915  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/config.json: {Name:mk226279b9a196e1a7ebbe8a74e398252caee8a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:46.141959  260360 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1017 18:56:46.142111  260360 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1017 18:56:46.142130  260360 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1017 18:56:46.142141  260360 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1017 18:56:46.142150  260360 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1017 18:56:46.142155  260360 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1017 18:57:04.187220  260360 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1017 18:57:04.187256  260360 cache.go:232] Successfully downloaded all kic artifacts
	I1017 18:57:04.187285  260360 start.go:360] acquireMachinesLock for addons-379549: {Name:mka00eef85230c5dd15a7d8abde55ed543d50e6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 18:57:04.187401  260360 start.go:364] duration metric: took 97.146µs to acquireMachinesLock for "addons-379549"
	I1017 18:57:04.187436  260360 start.go:93] Provisioning new machine with config: &{Name:addons-379549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-379549 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 18:57:04.187535  260360 start.go:125] createHost starting for "" (driver="docker")
	I1017 18:57:04.191083  260360 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1017 18:57:04.191338  260360 start.go:159] libmachine.API.Create for "addons-379549" (driver="docker")
	I1017 18:57:04.191388  260360 client.go:168] LocalClient.Create starting
	I1017 18:57:04.191540  260360 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem
	I1017 18:57:05.215779  260360 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem
	I1017 18:57:05.364193  260360 cli_runner.go:164] Run: docker network inspect addons-379549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 18:57:05.380198  260360 cli_runner.go:211] docker network inspect addons-379549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 18:57:05.380300  260360 network_create.go:284] running [docker network inspect addons-379549] to gather additional debugging logs...
	I1017 18:57:05.380321  260360 cli_runner.go:164] Run: docker network inspect addons-379549
	W1017 18:57:05.395850  260360 cli_runner.go:211] docker network inspect addons-379549 returned with exit code 1
	I1017 18:57:05.395884  260360 network_create.go:287] error running [docker network inspect addons-379549]: docker network inspect addons-379549: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-379549 not found
	I1017 18:57:05.395898  260360 network_create.go:289] output of [docker network inspect addons-379549]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-379549 not found
	
	** /stderr **
	I1017 18:57:05.396013  260360 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 18:57:05.412938  260360 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001be5450}
	I1017 18:57:05.412986  260360 network_create.go:124] attempt to create docker network addons-379549 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1017 18:57:05.413044  260360 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-379549 addons-379549
	I1017 18:57:05.465282  260360 network_create.go:108] docker network addons-379549 192.168.49.0/24 created
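minikube picked the first free private subnet (192.168.49.0/24) and created a profile-scoped bridge network for it. A minimal sketch of verifying the result by hand, reusing the same Go-template inspect style the log itself uses:

    $ docker network inspect addons-379549 \
        --format '{{(index .IPAM.Config 0).Subnet}} gw={{(index .IPAM.Config 0).Gateway}}'
    192.168.49.0/24 gw=192.168.49.1
    $ docker network ls --filter label=name.minikube.sigs.k8s.io=addons-379549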
	I1017 18:57:05.465325  260360 kic.go:121] calculated static IP "192.168.49.2" for the "addons-379549" container
	I1017 18:57:05.465400  260360 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 18:57:05.481289  260360 cli_runner.go:164] Run: docker volume create addons-379549 --label name.minikube.sigs.k8s.io=addons-379549 --label created_by.minikube.sigs.k8s.io=true
	I1017 18:57:05.498770  260360 oci.go:103] Successfully created a docker volume addons-379549
	I1017 18:57:05.498863  260360 cli_runner.go:164] Run: docker run --rm --name addons-379549-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-379549 --entrypoint /usr/bin/test -v addons-379549:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 18:57:07.611460  260360 cli_runner.go:217] Completed: docker run --rm --name addons-379549-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-379549 --entrypoint /usr/bin/test -v addons-379549:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (2.112558516s)
	I1017 18:57:07.611492  260360 oci.go:107] Successfully prepared a docker volume addons-379549
	I1017 18:57:07.611540  260360 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 18:57:07.611564  260360 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 18:57:07.611632  260360 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-379549:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1017 18:57:12.102739  260360 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-379549:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.491065061s)
	I1017 18:57:12.102773  260360 kic.go:203] duration metric: took 4.491206399s to extract preloaded images to volume ...
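The preload tarball is streamed into the profile's Docker volume by a disposable tar container (the ~4.5s step above). To see what landed in the volume, the same mount trick works; a sketch, assuming CRI-O's default image storage path under /var/lib/containers and using /bin/ls to bypass the image entrypoint (same kicbase image as above, digest omitted here):

    $ docker run --rm --entrypoint /bin/ls \
        -v addons-379549:/var \
        gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757 \
        /var/lib/containers/storage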
	W1017 18:57:12.102956  260360 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1017 18:57:12.103072  260360 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 18:57:12.164473  260360 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-379549 --name addons-379549 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-379549 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-379549 --network addons-379549 --ip 192.168.49.2 --volume addons-379549:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 18:57:12.473779  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Running}}
	I1017 18:57:12.500020  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:12.518959  260360 cli_runner.go:164] Run: docker exec addons-379549 stat /var/lib/dpkg/alternatives/iptables
	I1017 18:57:12.572662  260360 oci.go:144] the created container "addons-379549" has a running status.
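The node itself is just a privileged container on that network with a static IP and host-published ports for SSH (22), the Docker API (2376) and the Kubernetes API server (8443). A sketch of checking its state and the 8443 mapping, mirroring the inspect templates the log uses for 22/tcp further down:

    $ docker container inspect addons-379549 \
        --format '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
    running 192.168.49.2
    $ docker container inspect addons-379549 \
        --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'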
	I1017 18:57:12.572693  260360 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa...
	I1017 18:57:13.655449  260360 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 18:57:13.678029  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:13.699636  260360 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 18:57:13.699657  260360 kic_runner.go:114] Args: [docker exec --privileged addons-379549 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 18:57:13.740667  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:13.756834  260360 machine.go:93] provisionDockerMachine start ...
	I1017 18:57:13.756935  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:13.772867  260360 main.go:141] libmachine: Using SSH client type: native
	I1017 18:57:13.773184  260360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1017 18:57:13.773199  260360 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 18:57:13.915907  260360 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-379549
	
	I1017 18:57:13.915935  260360 ubuntu.go:182] provisioning hostname "addons-379549"
	I1017 18:57:13.915998  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:13.934912  260360 main.go:141] libmachine: Using SSH client type: native
	I1017 18:57:13.935235  260360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1017 18:57:13.935252  260360 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-379549 && echo "addons-379549" | sudo tee /etc/hostname
	I1017 18:57:14.089851  260360 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-379549
	
	I1017 18:57:14.089947  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:14.108002  260360 main.go:141] libmachine: Using SSH client type: native
	I1017 18:57:14.108322  260360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1017 18:57:14.108344  260360 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-379549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-379549/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-379549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 18:57:14.252703  260360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 18:57:14.252732  260360 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 18:57:14.252760  260360 ubuntu.go:190] setting up certificates
	I1017 18:57:14.252770  260360 provision.go:84] configureAuth start
	I1017 18:57:14.252843  260360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-379549
	I1017 18:57:14.268726  260360 provision.go:143] copyHostCerts
	I1017 18:57:14.268815  260360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 18:57:14.268948  260360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 18:57:14.269016  260360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 18:57:14.269069  260360 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.addons-379549 san=[127.0.0.1 192.168.49.2 addons-379549 localhost minikube]
	I1017 18:57:14.624117  260360 provision.go:177] copyRemoteCerts
	I1017 18:57:14.624183  260360 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 18:57:14.624228  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:14.642148  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:14.748041  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 18:57:14.764641  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 18:57:14.781215  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 18:57:14.798503  260360 provision.go:87] duration metric: took 545.715741ms to configureAuth
	I1017 18:57:14.798530  260360 ubuntu.go:206] setting minikube options for container-runtime
	I1017 18:57:14.798764  260360 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:57:14.798902  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:14.817109  260360 main.go:141] libmachine: Using SSH client type: native
	I1017 18:57:14.817445  260360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1017 18:57:14.817468  260360 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 18:57:15.073484  260360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 18:57:15.073510  260360 machine.go:96] duration metric: took 1.31665209s to provisionDockerMachine
	I1017 18:57:15.073520  260360 client.go:171] duration metric: took 10.882122485s to LocalClient.Create
	I1017 18:57:15.073533  260360 start.go:167] duration metric: took 10.882196115s to libmachine.API.Create "addons-379549"
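The CRIO_MINIKUBE_OPTIONS drop-in written a few lines above is what injects the --insecure-registry flag for the service CIDR. A quick way to confirm it from the host (a sketch, assuming the crio unit in the kicbase image picks up /etc/sysconfig/crio.minikube as an environment file):

    $ docker exec addons-379549 cat /etc/sysconfig/crio.minikube
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    $ docker exec addons-379549 systemctl cat crio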
	I1017 18:57:15.073540  260360 start.go:293] postStartSetup for "addons-379549" (driver="docker")
	I1017 18:57:15.073551  260360 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 18:57:15.073682  260360 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 18:57:15.073737  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:15.091582  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:15.196744  260360 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 18:57:15.200325  260360 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 18:57:15.200354  260360 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 18:57:15.200367  260360 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 18:57:15.200436  260360 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 18:57:15.200463  260360 start.go:296] duration metric: took 126.916952ms for postStartSetup
	I1017 18:57:15.200825  260360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-379549
	I1017 18:57:15.217007  260360 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/config.json ...
	I1017 18:57:15.217312  260360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 18:57:15.217362  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:15.233525  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:15.333718  260360 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 18:57:15.338635  260360 start.go:128] duration metric: took 11.15108393s to createHost
	I1017 18:57:15.338667  260360 start.go:83] releasing machines lock for "addons-379549", held for 11.151249103s
	I1017 18:57:15.338742  260360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-379549
	I1017 18:57:15.355117  260360 ssh_runner.go:195] Run: cat /version.json
	I1017 18:57:15.355169  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:15.355201  260360 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 18:57:15.355271  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:15.378592  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:15.380055  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:15.569774  260360 ssh_runner.go:195] Run: systemctl --version
	I1017 18:57:15.575948  260360 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 18:57:15.610338  260360 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 18:57:15.614453  260360 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 18:57:15.614526  260360 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 18:57:15.641529  260360 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1017 18:57:15.641550  260360 start.go:495] detecting cgroup driver to use...
	I1017 18:57:15.641586  260360 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 18:57:15.641635  260360 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 18:57:15.657651  260360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 18:57:15.669534  260360 docker.go:218] disabling cri-docker service (if available) ...
	I1017 18:57:15.669627  260360 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 18:57:15.686887  260360 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 18:57:15.704885  260360 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 18:57:15.818804  260360 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 18:57:15.943700  260360 docker.go:234] disabling docker service ...
	I1017 18:57:15.943801  260360 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 18:57:15.964351  260360 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 18:57:15.977403  260360 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 18:57:16.097830  260360 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 18:57:16.216321  260360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 18:57:16.229441  260360 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 18:57:16.243627  260360 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 18:57:16.243697  260360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:57:16.252363  260360 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 18:57:16.252437  260360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:57:16.262152  260360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:57:16.270961  260360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:57:16.279816  260360 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 18:57:16.288177  260360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:57:16.296775  260360 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:57:16.311072  260360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:57:16.321422  260360 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 18:57:16.330500  260360 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 18:57:16.338919  260360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 18:57:16.465775  260360 ssh_runner.go:195] Run: sudo systemctl restart crio
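The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted; after those edits the touched keys should read roughly as follows (a sketch of the expected drop-in, assuming the usual CRI-O TOML sections, not a verbatim dump from this run):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]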
	I1017 18:57:16.594833  260360 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 18:57:16.594918  260360 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 18:57:16.598595  260360 start.go:563] Will wait 60s for crictl version
	I1017 18:57:16.598660  260360 ssh_runner.go:195] Run: which crictl
	I1017 18:57:16.602036  260360 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 18:57:16.626335  260360 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 18:57:16.626436  260360 ssh_runner.go:195] Run: crio --version
	I1017 18:57:16.657664  260360 ssh_runner.go:195] Run: crio --version
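With the socket up, the same version checks can be reproduced against CRI-O directly; a sketch, assuming the crictl endpoint configured in /etc/crictl.yaml above:

    $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    $ sudo crictl info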
	I1017 18:57:16.688741  260360 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 18:57:16.691620  260360 cli_runner.go:164] Run: docker network inspect addons-379549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 18:57:16.708111  260360 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 18:57:16.711945  260360 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 18:57:16.721686  260360 kubeadm.go:883] updating cluster {Name:addons-379549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-379549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 18:57:16.721796  260360 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 18:57:16.721853  260360 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 18:57:16.753916  260360 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 18:57:16.753938  260360 crio.go:433] Images already preloaded, skipping extraction
	I1017 18:57:16.753999  260360 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 18:57:16.786045  260360 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 18:57:16.786121  260360 cache_images.go:85] Images are preloaded, skipping loading
	I1017 18:57:16.786250  260360 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 18:57:16.786382  260360 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-379549 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-379549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
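The kubelet unit override shown above is the content later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 363-byte scp at 18:57:16 below). Once the node is running, the effective flags can be checked with systemd itself (a sketch):

    $ docker exec addons-379549 systemctl cat kubelet
    $ docker exec addons-379549 systemctl show kubelet -p ExecStart --no-pager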
	I1017 18:57:16.786615  260360 ssh_runner.go:195] Run: crio config
	I1017 18:57:16.844979  260360 cni.go:84] Creating CNI manager for ""
	I1017 18:57:16.845019  260360 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 18:57:16.845041  260360 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 18:57:16.845065  260360 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-379549 NodeName:addons-379549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 18:57:16.845217  260360 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-379549"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
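This generated kubeadm config is what gets copied to /var/tmp/minikube/kubeadm.yaml.new (the 2210-byte scp below) and later drives kubeadm on the node. A minimal sketch of sanity-checking it by hand, assuming the kubeadm binary staged under /var/lib/minikube/binaries as shown a few lines down:

    $ docker exec addons-379549 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new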
	
	I1017 18:57:16.845378  260360 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 18:57:16.853109  260360 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 18:57:16.853224  260360 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 18:57:16.860254  260360 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 18:57:16.872683  260360 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 18:57:16.885295  260360 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1017 18:57:16.897975  260360 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1017 18:57:16.901837  260360 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 18:57:16.911393  260360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 18:57:17.020146  260360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 18:57:17.037001  260360 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549 for IP: 192.168.49.2
	I1017 18:57:17.037078  260360 certs.go:195] generating shared ca certs ...
	I1017 18:57:17.037113  260360 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:17.037336  260360 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 18:57:17.352272  260360 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt ...
	I1017 18:57:17.352303  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt: {Name:mkd0682e9ec696a5dc3c6408bce8c9ab628da2b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:17.352545  260360 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key ...
	I1017 18:57:17.352560  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key: {Name:mk1b70c572c926b863145e313486a5bdd6a8745e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:17.352710  260360 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 18:57:18.438561  260360 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt ...
	I1017 18:57:18.438595  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt: {Name:mk28577ab9371ba91d63d0876a6982d2a222e4b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:18.438796  260360 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key ...
	I1017 18:57:18.438809  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key: {Name:mk29a7d483ab314486849922d4ed3f5ae86198c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:18.438894  260360 certs.go:257] generating profile certs ...
	I1017 18:57:18.438956  260360 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.key
	I1017 18:57:18.438975  260360 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt with IP's: []
	I1017 18:57:19.537712  260360 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt ...
	I1017 18:57:19.537744  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: {Name:mk6eaf62f01188e8fb25b1a3cb3b4a8aafb36db6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:19.537939  260360 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.key ...
	I1017 18:57:19.537952  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.key: {Name:mk78bd2ed432cd9cc4b15baaa295e748d5ea633f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:19.538043  260360 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.key.29479c62
	I1017 18:57:19.538065  260360 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.crt.29479c62 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1017 18:57:19.625229  260360 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.crt.29479c62 ...
	I1017 18:57:19.625258  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.crt.29479c62: {Name:mk514b25fe2233f248f1fe4ad25a562c05e30f40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:19.625422  260360 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.key.29479c62 ...
	I1017 18:57:19.625433  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.key.29479c62: {Name:mk3350940796e397f2e3d8e9d43c2a533084a50e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:19.625514  260360 certs.go:382] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.crt.29479c62 -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.crt
	I1017 18:57:19.625587  260360 certs.go:386] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.key.29479c62 -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.key
	I1017 18:57:19.625636  260360 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/proxy-client.key
	I1017 18:57:19.625651  260360 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/proxy-client.crt with IP's: []
	I1017 18:57:21.964513  260360 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/proxy-client.crt ...
	I1017 18:57:21.964553  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/proxy-client.crt: {Name:mk6de73cde00b4d1c013607eed0c20a102f7da1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:21.964755  260360 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/proxy-client.key ...
	I1017 18:57:21.964770  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/proxy-client.key: {Name:mk26f3413eac9176bc7d5de7fd6760ef830e1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:21.964962  260360 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 18:57:21.965013  260360 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 18:57:21.965042  260360 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 18:57:21.965069  260360 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 18:57:21.965697  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 18:57:21.984675  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 18:57:22.002685  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 18:57:22.023425  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 18:57:22.042344  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1017 18:57:22.060851  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 18:57:22.080145  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 18:57:22.099098  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 18:57:22.117661  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 18:57:22.135751  260360 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 18:57:22.148649  260360 ssh_runner.go:195] Run: openssl version
	I1017 18:57:22.155063  260360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 18:57:22.163352  260360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 18:57:22.166973  260360 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 18:57:22.167040  260360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 18:57:22.207924  260360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 18:57:22.216202  260360 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 18:57:22.219993  260360 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 18:57:22.220045  260360 kubeadm.go:400] StartCluster: {Name:addons-379549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-379549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 18:57:22.220159  260360 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:57:22.220248  260360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:57:22.251333  260360 cri.go:89] found id: ""
	I1017 18:57:22.251450  260360 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 18:57:22.259122  260360 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 18:57:22.266820  260360 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 18:57:22.266931  260360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 18:57:22.274761  260360 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 18:57:22.274782  260360 kubeadm.go:157] found existing configuration files:
	
	I1017 18:57:22.274836  260360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 18:57:22.282682  260360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 18:57:22.282753  260360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 18:57:22.290594  260360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 18:57:22.298666  260360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 18:57:22.298730  260360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 18:57:22.306007  260360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 18:57:22.313971  260360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 18:57:22.314037  260360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 18:57:22.321858  260360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 18:57:22.330061  260360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 18:57:22.330124  260360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 18:57:22.337941  260360 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 18:57:22.379933  260360 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 18:57:22.379996  260360 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 18:57:22.401337  260360 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 18:57:22.401417  260360 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1017 18:57:22.401459  260360 kubeadm.go:318] OS: Linux
	I1017 18:57:22.401511  260360 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 18:57:22.401566  260360 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1017 18:57:22.401619  260360 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 18:57:22.401674  260360 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 18:57:22.401729  260360 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 18:57:22.401783  260360 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 18:57:22.401837  260360 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 18:57:22.401892  260360 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 18:57:22.401944  260360 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1017 18:57:22.469843  260360 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 18:57:22.469989  260360 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 18:57:22.470091  260360 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 18:57:22.480364  260360 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 18:57:22.484497  260360 out.go:252]   - Generating certificates and keys ...
	I1017 18:57:22.484661  260360 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 18:57:22.484758  260360 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 18:57:22.813074  260360 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 18:57:23.742162  260360 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 18:57:24.134338  260360 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 18:57:24.464535  260360 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 18:57:24.727828  260360 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 18:57:24.727963  260360 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-379549 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1017 18:57:25.748111  260360 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 18:57:25.748283  260360 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-379549 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1017 18:57:26.355224  260360 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 18:57:26.623683  260360 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 18:57:26.700096  260360 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 18:57:26.700208  260360 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 18:57:27.242667  260360 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 18:57:27.513928  260360 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 18:57:27.822697  260360 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 18:57:28.439146  260360 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 18:57:28.957866  260360 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 18:57:28.958709  260360 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 18:57:28.961634  260360 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 18:57:28.965127  260360 out.go:252]   - Booting up control plane ...
	I1017 18:57:28.965237  260360 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 18:57:28.965340  260360 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 18:57:28.965974  260360 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 18:57:28.986434  260360 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 18:57:28.986542  260360 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 18:57:28.995297  260360 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 18:57:28.995401  260360 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 18:57:28.995442  260360 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 18:57:29.124004  260360 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 18:57:29.124123  260360 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 18:57:30.624666  260360 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500904511s
	I1017 18:57:30.628265  260360 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 18:57:30.628363  260360 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1017 18:57:30.628456  260360 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 18:57:30.628555  260360 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 18:57:33.282137  260360 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.65328676s
	I1017 18:57:35.891599  260360 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.26328261s
	I1017 18:57:36.631486  260360 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001072811s
	I1017 18:57:36.651809  260360 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 18:57:36.677306  260360 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 18:57:36.691312  260360 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 18:57:36.691537  260360 kubeadm.go:318] [mark-control-plane] Marking the node addons-379549 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 18:57:36.704574  260360 kubeadm.go:318] [bootstrap-token] Using token: aj3xrv.v6mngpc276ee8slz
	I1017 18:57:36.709760  260360 out.go:252]   - Configuring RBAC rules ...
	I1017 18:57:36.709900  260360 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 18:57:36.712548  260360 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 18:57:36.722994  260360 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 18:57:36.732832  260360 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 18:57:36.737911  260360 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 18:57:36.742261  260360 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 18:57:37.037529  260360 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 18:57:37.482923  260360 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 18:57:38.036856  260360 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 18:57:38.038341  260360 kubeadm.go:318] 
	I1017 18:57:38.038419  260360 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 18:57:38.038425  260360 kubeadm.go:318] 
	I1017 18:57:38.038518  260360 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 18:57:38.038526  260360 kubeadm.go:318] 
	I1017 18:57:38.038562  260360 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 18:57:38.038624  260360 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 18:57:38.038694  260360 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 18:57:38.038700  260360 kubeadm.go:318] 
	I1017 18:57:38.038756  260360 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 18:57:38.038761  260360 kubeadm.go:318] 
	I1017 18:57:38.038812  260360 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 18:57:38.038816  260360 kubeadm.go:318] 
	I1017 18:57:38.038870  260360 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 18:57:38.038947  260360 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 18:57:38.039024  260360 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 18:57:38.039030  260360 kubeadm.go:318] 
	I1017 18:57:38.039129  260360 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 18:57:38.039210  260360 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 18:57:38.039215  260360 kubeadm.go:318] 
	I1017 18:57:38.039301  260360 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token aj3xrv.v6mngpc276ee8slz \
	I1017 18:57:38.039407  260360 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 \
	I1017 18:57:38.039428  260360 kubeadm.go:318] 	--control-plane 
	I1017 18:57:38.039432  260360 kubeadm.go:318] 
	I1017 18:57:38.039519  260360 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 18:57:38.039523  260360 kubeadm.go:318] 
	I1017 18:57:38.039607  260360 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token aj3xrv.v6mngpc276ee8slz \
	I1017 18:57:38.039715  260360 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 
	I1017 18:57:38.043863  260360 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1017 18:57:38.044108  260360 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1017 18:57:38.044253  260360 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 18:57:38.044291  260360 cni.go:84] Creating CNI manager for ""
	I1017 18:57:38.044301  260360 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 18:57:38.049641  260360 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 18:57:38.052455  260360 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 18:57:38.057351  260360 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 18:57:38.057374  260360 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 18:57:38.071848  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 18:57:38.347128  260360 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 18:57:38.347282  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:38.347373  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-379549 minikube.k8s.io/updated_at=2025_10_17T18_57_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d minikube.k8s.io/name=addons-379549 minikube.k8s.io/primary=true
	I1017 18:57:38.497621  260360 ops.go:34] apiserver oom_adj: -16
	I1017 18:57:38.497730  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:38.998231  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:39.498366  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:39.997813  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:40.498405  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:40.998566  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:41.498240  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:41.997931  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:42.498679  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:42.636269  260360 kubeadm.go:1113] duration metric: took 4.289030384s to wait for elevateKubeSystemPrivileges
	I1017 18:57:42.636303  260360 kubeadm.go:402] duration metric: took 20.416261286s to StartCluster
	I1017 18:57:42.636320  260360 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:42.636454  260360 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 18:57:42.636997  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:42.637251  260360 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 18:57:42.637409  260360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 18:57:42.637729  260360 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:57:42.637786  260360 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1017 18:57:42.637875  260360 addons.go:69] Setting yakd=true in profile "addons-379549"
	I1017 18:57:42.637895  260360 addons.go:238] Setting addon yakd=true in "addons-379549"
	I1017 18:57:42.637929  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.638711  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.638895  260360 addons.go:69] Setting inspektor-gadget=true in profile "addons-379549"
	I1017 18:57:42.638917  260360 addons.go:238] Setting addon inspektor-gadget=true in "addons-379549"
	I1017 18:57:42.638941  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.639473  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.640058  260360 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-379549"
	I1017 18:57:42.640081  260360 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-379549"
	I1017 18:57:42.640111  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.640566  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.644266  260360 addons.go:69] Setting metrics-server=true in profile "addons-379549"
	I1017 18:57:42.644308  260360 addons.go:238] Setting addon metrics-server=true in "addons-379549"
	I1017 18:57:42.644340  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.645002  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.676553  260360 addons.go:69] Setting cloud-spanner=true in profile "addons-379549"
	I1017 18:57:42.676611  260360 addons.go:238] Setting addon cloud-spanner=true in "addons-379549"
	I1017 18:57:42.676648  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.677369  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.680715  260360 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-379549"
	I1017 18:57:42.680764  260360 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-379549"
	I1017 18:57:42.680813  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.683518  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.692930  260360 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-379549"
	I1017 18:57:42.693028  260360 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-379549"
	I1017 18:57:42.693066  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.693773  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.700589  260360 addons.go:69] Setting registry=true in profile "addons-379549"
	I1017 18:57:42.700631  260360 addons.go:238] Setting addon registry=true in "addons-379549"
	I1017 18:57:42.700679  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.701342  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.709141  260360 addons.go:69] Setting default-storageclass=true in profile "addons-379549"
	I1017 18:57:42.709170  260360 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-379549"
	I1017 18:57:42.709539  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.722131  260360 addons.go:69] Setting registry-creds=true in profile "addons-379549"
	I1017 18:57:42.742147  260360 addons.go:238] Setting addon registry-creds=true in "addons-379549"
	I1017 18:57:42.742217  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.742763  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.722777  260360 addons.go:69] Setting storage-provisioner=true in profile "addons-379549"
	I1017 18:57:42.763748  260360 addons.go:238] Setting addon storage-provisioner=true in "addons-379549"
	I1017 18:57:42.763796  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.764363  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.722806  260360 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-379549"
	I1017 18:57:42.777202  260360 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-379549"
	I1017 18:57:42.777592  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.722817  260360 addons.go:69] Setting volcano=true in profile "addons-379549"
	I1017 18:57:42.795928  260360 addons.go:238] Setting addon volcano=true in "addons-379549"
	I1017 18:57:42.795990  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.796559  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.722824  260360 addons.go:69] Setting volumesnapshots=true in profile "addons-379549"
	I1017 18:57:42.811899  260360 addons.go:238] Setting addon volumesnapshots=true in "addons-379549"
	I1017 18:57:42.811942  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.812499  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.723006  260360 out.go:179] * Verifying Kubernetes components...
	I1017 18:57:42.740077  260360 addons.go:69] Setting gcp-auth=true in profile "addons-379549"
	I1017 18:57:42.838928  260360 mustload.go:65] Loading cluster: addons-379549
	I1017 18:57:42.839150  260360 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:57:42.839404  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.850995  260360 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1017 18:57:42.740092  260360 addons.go:69] Setting ingress=true in profile "addons-379549"
	I1017 18:57:42.857859  260360 addons.go:238] Setting addon ingress=true in "addons-379549"
	I1017 18:57:42.857906  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.858365  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.740101  260360 addons.go:69] Setting ingress-dns=true in profile "addons-379549"
	I1017 18:57:42.872223  260360 addons.go:238] Setting addon ingress-dns=true in "addons-379549"
	I1017 18:57:42.872272  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.872789  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.877647  260360 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1017 18:57:42.895294  260360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 18:57:42.901880  260360 out.go:179]   - Using image docker.io/registry:3.0.0
	I1017 18:57:42.928910  260360 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1017 18:57:42.932211  260360 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1017 18:57:42.935642  260360 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1017 18:57:42.935676  260360 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1017 18:57:42.935753  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:42.936121  260360 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1017 18:57:42.939724  260360 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1017 18:57:42.942373  260360 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 18:57:42.942390  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1017 18:57:42.942451  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:42.942642  260360 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1017 18:57:42.942856  260360 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1017 18:57:42.950261  260360 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1017 18:57:42.954140  260360 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1017 18:57:42.954519  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1017 18:57:42.954582  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:42.967692  260360 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1017 18:57:42.975920  260360 addons.go:238] Setting addon default-storageclass=true in "addons-379549"
	I1017 18:57:42.976801  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.977174  260360 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1017 18:57:42.977284  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:42.977423  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.999843  260360 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1017 18:57:42.975985  260360 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1017 18:57:43.000210  260360 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1017 18:57:43.000294  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:42.976108  260360 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1017 18:57:43.030511  260360 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1017 18:57:43.034981  260360 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 18:57:43.035053  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1017 18:57:43.035148  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:43.049234  260360 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-379549"
	I1017 18:57:43.049276  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:43.049671  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:43.060566  260360 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1017 18:57:43.064291  260360 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1017 18:57:43.068157  260360 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1017 18:57:43.071471  260360 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1017 18:57:43.072116  260360 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 18:57:43.072140  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1017 18:57:43.072203  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:43.096418  260360 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1017 18:57:43.100735  260360 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1017 18:57:43.100776  260360 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1017 18:57:43.100954  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:43.110806  260360 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 18:57:43.111059  260360 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1017 18:57:43.111076  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1017 18:57:43.111150  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	W1017 18:57:43.128988  260360 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1017 18:57:43.149016  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:43.151117  260360 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1017 18:57:43.151135  260360 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1017 18:57:43.151195  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:43.151222  260360 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 18:57:43.151250  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 18:57:43.151300  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:43.187593  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.209826  260360 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1017 18:57:43.216725  260360 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 18:57:43.216840  260360 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 18:57:43.221029  260360 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 18:57:43.216884  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.216919  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.222310  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:43.228483  260360 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 18:57:43.228508  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1017 18:57:43.228597  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:43.260675  260360 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1017 18:57:43.262214  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.267030  260360 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 18:57:43.272196  260360 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 18:57:43.272265  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1017 18:57:43.272348  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:43.293085  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.320719  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.339427  260360 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1017 18:57:43.342544  260360 out.go:179]   - Using image docker.io/busybox:stable
	I1017 18:57:43.347934  260360 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 18:57:43.347958  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1017 18:57:43.348023  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:43.360676  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.373140  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.378959  260360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 18:57:43.379239  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.380331  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.415639  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	W1017 18:57:43.424895  260360 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1017 18:57:43.424944  260360 retry.go:31] will retry after 349.95256ms: ssh: handshake failed: EOF
	I1017 18:57:43.443878  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.447985  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.458620  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.459135  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	W1017 18:57:43.477370  260360 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1017 18:57:43.477400  260360 retry.go:31] will retry after 355.396394ms: ssh: handshake failed: EOF
	I1017 18:57:43.537319  260360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1017 18:57:43.776328  260360 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1017 18:57:43.776372  260360 retry.go:31] will retry after 257.451368ms: ssh: handshake failed: EOF
	I1017 18:57:44.071228  260360 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1017 18:57:44.071252  260360 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1017 18:57:44.075109  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1017 18:57:44.082129  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 18:57:44.091952  260360 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1017 18:57:44.092030  260360 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1017 18:57:44.128328  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 18:57:44.129910  260360 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:44.129971  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1017 18:57:44.138351  260360 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1017 18:57:44.138423  260360 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1017 18:57:44.146443  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 18:57:44.166967  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 18:57:44.175833  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 18:57:44.185767  260360 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1017 18:57:44.185841  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1017 18:57:44.190022  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 18:57:44.195800  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 18:57:44.286620  260360 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1017 18:57:44.286694  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1017 18:57:44.313694  260360 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1017 18:57:44.313771  260360 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1017 18:57:44.338722  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:44.344756  260360 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1017 18:57:44.344828  260360 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1017 18:57:44.408026  260360 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1017 18:57:44.408109  260360 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1017 18:57:44.450555  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 18:57:44.509009  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1017 18:57:44.537695  260360 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1017 18:57:44.537769  260360 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1017 18:57:44.564453  260360 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1017 18:57:44.564568  260360 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1017 18:57:44.577565  260360 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 18:57:44.577644  260360 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1017 18:57:44.675600  260360 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1017 18:57:44.675678  260360 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1017 18:57:44.703583  260360 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1017 18:57:44.703602  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1017 18:57:44.720924  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 18:57:44.723942  260360 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1017 18:57:44.723962  260360 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1017 18:57:44.836815  260360 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 18:57:44.836885  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1017 18:57:44.842296  260360 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.463299723s)
	I1017 18:57:44.842375  260360 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
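Note: the sed pipeline completed above rewrites the CoreDNS ConfigMap in place, inserting a `log` directive ahead of `errors` and a `hosts` block ahead of the `forward . /etc/resolv.conf` line so that host.minikube.internal resolves to the Docker gateway (192.168.49.1) from inside the cluster. A rough sketch of the resulting Corefile fragment (other plugins omitted; the full ConfigMap is not shown in this log):

        log
        errors
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf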
	I1017 18:57:44.843429  260360 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.306087188s)
	I1017 18:57:44.844187  260360 node_ready.go:35] waiting up to 6m0s for node "addons-379549" to be "Ready" ...
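Note: node_ready.go polls the node object until its Ready condition reports True; the `"Ready":"False"` warnings that follow are those poll results. A one-off equivalent from a shell would look roughly like the following (a sketch, not a command taken from this run):

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig wait --for=condition=Ready node/addons-379549 --timeout=6m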
	I1017 18:57:44.902656  260360 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1017 18:57:44.902733  260360 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1017 18:57:44.906466  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1017 18:57:44.978560  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 18:57:44.992637  260360 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1017 18:57:44.992709  260360 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1017 18:57:45.263214  260360 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1017 18:57:45.263308  260360 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1017 18:57:45.363193  260360 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-379549" context rescaled to 1 replicas
	I1017 18:57:45.572937  260360 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1017 18:57:45.573018  260360 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1017 18:57:45.872996  260360 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1017 18:57:45.873067  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1017 18:57:46.078170  260360 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1017 18:57:46.078248  260360 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1017 18:57:46.249322  260360 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1017 18:57:46.249396  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1017 18:57:46.453107  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.370878855s)
	I1017 18:57:46.453192  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.378008832s)
	I1017 18:57:46.461537  260360 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1017 18:57:46.461599  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1017 18:57:46.618733  260360 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1017 18:57:46.618808  260360 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1017 18:57:46.770195  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1017 18:57:46.857249  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:57:47.393911  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.265510556s)
	I1017 18:57:47.455146  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.308600977s)
	W1017 18:57:48.860339  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:57:49.020395  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.83029357s)
	I1017 18:57:49.020502  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.824643462s)
	I1017 18:57:49.020834  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.682039164s)
	W1017 18:57:49.020893  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:49.020924  260360 retry.go:31] will retry after 286.470389ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
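Note: kubectl validation requires every manifest document to carry top-level `apiVersion` and `kind` fields, so the `[apiVersion not set, kind not set]` failure above means ig-crd.yaml reached the node as a document without them (for example an empty or header-less YAML document), which is why every retry of the same file fails identically. Purely as a hedged illustration of what the validator expects (the real Inspektor Gadget CRD content is not shown in this log, and the names below are placeholders), a minimal CustomResourceDefinition that would pass this check starts like:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: examples.gadget.example.io   # placeholder name, not from this run
    spec:
      group: gadget.example.io
      names:
        kind: Example
        plural: examples
      scope: Namespaced
      versions:
        - name: v1alpha1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object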
	I1017 18:57:49.021012  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.570385128s)
	I1017 18:57:49.021217  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.512128972s)
	I1017 18:57:49.021254  260360 addons.go:479] Verifying addon registry=true in "addons-379549"
	I1017 18:57:49.021460  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.844435636s)
	I1017 18:57:49.021732  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.115203498s)
	I1017 18:57:49.021825  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.854787198s)
	I1017 18:57:49.021852  260360 addons.go:479] Verifying addon ingress=true in "addons-379549"
	I1017 18:57:49.021662  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.30071432s)
	I1017 18:57:49.022440  260360 addons.go:479] Verifying addon metrics-server=true in "addons-379549"
	I1017 18:57:49.022020  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.043376928s)
	W1017 18:57:49.022478  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1017 18:57:49.022491  260360 retry.go:31] will retry after 267.602499ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
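Note: this failure is an ordering problem rather than a bad manifest. The VolumeSnapshot CRDs and the csi-hostpath-snapclass VolumeSnapshotClass are applied in the same kubectl invocation, and the API server cannot map the VolumeSnapshotClass kind until the freshly created CRDs are established, hence the "ensure CRDs are installed first" hint; the forced re-apply at 18:57:49 completes without a further retry once the CRDs are registered. A hedged sketch of how the two steps could be sequenced explicitly, assuming the same file names as above:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml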
	I1017 18:57:49.024729  260360 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-379549 service yakd-dashboard -n yakd-dashboard
	
	I1017 18:57:49.024792  260360 out.go:179] * Verifying ingress addon...
	I1017 18:57:49.024837  260360 out.go:179] * Verifying registry addon...
	I1017 18:57:49.029126  260360 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1017 18:57:49.029179  260360 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1017 18:57:49.039799  260360 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1017 18:57:49.039818  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:49.040326  260360 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1017 18:57:49.040345  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1017 18:57:49.049406  260360 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
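Note: the default-storageclass callback lost an optimistic-concurrency race: it read the local-path StorageClass, another writer (the storage-provisioner-rancher addon being applied at the same time) updated it first, and the write-back with the stale resourceVersion was rejected with "the object has been modified". Re-reading and re-patching clears it; a hedged manual equivalent, with the StorageClass names assumed from this run (`standard` being minikube's usual default class), would be:

    kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    kubectl patch storageclass standard -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'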
	I1017 18:57:49.291053  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 18:57:49.308089  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:49.322206  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.551902433s)
	I1017 18:57:49.322288  260360 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-379549"
	I1017 18:57:49.327449  260360 out.go:179] * Verifying csi-hostpath-driver addon...
	I1017 18:57:49.331281  260360 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1017 18:57:49.344227  260360 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1017 18:57:49.344253  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:49.534412  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:49.534561  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:49.835664  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:50.033819  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:50.034692  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:50.335330  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:50.533679  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:50.533915  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:50.762960  260360 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1017 18:57:50.763058  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:50.780095  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:50.835452  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:50.895660  260360 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1017 18:57:50.908459  260360 addons.go:238] Setting addon gcp-auth=true in "addons-379549"
	I1017 18:57:50.908509  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:50.908986  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:50.925597  260360 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1017 18:57:50.925654  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:50.949827  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:51.033059  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:51.033149  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:51.334076  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:57:51.347836  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:57:51.533494  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:51.533664  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:51.835365  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:52.033919  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:52.035029  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:52.062144  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.77095881s)
	I1017 18:57:52.062235  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.754064389s)
	W1017 18:57:52.062288  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:52.062323  260360 retry.go:31] will retry after 537.819655ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:52.062328  260360 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.136706738s)
	I1017 18:57:52.065554  260360 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 18:57:52.068377  260360 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1017 18:57:52.071189  260360 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1017 18:57:52.071225  260360 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1017 18:57:52.085903  260360 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1017 18:57:52.085968  260360 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1017 18:57:52.099764  260360 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 18:57:52.099786  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1017 18:57:52.113693  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 18:57:52.334960  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:52.537724  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:52.537880  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:52.601014  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:52.610638  260360 addons.go:479] Verifying addon gcp-auth=true in "addons-379549"
	I1017 18:57:52.613688  260360 out.go:179] * Verifying gcp-auth addon...
	I1017 18:57:52.616454  260360 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1017 18:57:52.638491  260360 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1017 18:57:52.638519  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:52.835211  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:53.033679  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:53.033985  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:53.119992  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:53.334740  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:57:53.459391  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:53.459423  260360 retry.go:31] will retry after 709.24434ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:53.532370  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:53.532740  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:53.619385  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:53.834485  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:57:53.848165  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:57:54.032353  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:54.033102  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:54.119839  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:54.169211  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:54.334285  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:54.535599  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:54.536172  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:54.620960  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:54.834963  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:57:54.968996  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:54.969033  260360 retry.go:31] will retry after 1.014713465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:55.034099  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:55.034462  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:55.119731  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:55.335072  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:55.532475  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:55.532594  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:55.619821  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:55.834802  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:55.984931  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:56.034117  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:56.035015  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:56.120044  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:56.334245  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:57:56.348953  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:57:56.535857  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:56.536417  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:56.620509  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:56.800655  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:56.800689  260360 retry.go:31] will retry after 1.669080544s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:56.834473  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:57.032395  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:57.032728  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:57.119411  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:57.334563  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:57.532865  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:57.533106  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:57.620352  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:57.835045  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:58.032291  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:58.032476  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:58.120346  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:58.334485  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:57:58.350845  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:57:58.470019  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:58.534220  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:58.534435  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:58.620260  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:58.835151  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:59.033307  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:59.033598  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:59.120222  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:59.257943  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:59.257974  260360 retry.go:31] will retry after 1.734205979s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:59.334882  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:59.532850  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:59.533569  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:59.619549  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:59.834462  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:00.057961  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:00.058088  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:00.123238  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:00.335769  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:00.352149  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:00.533308  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:00.533760  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:00.619335  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:00.834581  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:00.992908  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:58:01.033669  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:01.034497  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:01.119622  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:01.337123  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:01.532743  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:01.533005  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:01.619583  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:58:01.820844  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:01.820925  260360 retry.go:31] will retry after 1.458897537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:01.834881  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:02.033335  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:02.033648  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:02.119264  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:02.335039  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:02.533750  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:02.533954  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:02.619514  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:02.834609  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:02.847344  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:03.033608  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:03.033895  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:03.119839  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:03.280014  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:58:03.334463  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:03.533579  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:03.533797  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:03.619825  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:03.834438  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:04.035208  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:04.035762  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1017 18:58:04.091732  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:04.091821  260360 retry.go:31] will retry after 6.060894765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:04.119972  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:04.334878  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:04.533578  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:04.533893  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:04.634324  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:04.834490  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:04.847474  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:05.032910  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:05.033096  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:05.120136  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:05.335062  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:05.533236  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:05.533446  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:05.621937  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:05.835114  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:06.033242  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:06.033639  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:06.119452  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:06.334579  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:06.532440  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:06.532818  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:06.619500  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:06.834567  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:07.032904  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:07.033084  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:07.119885  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:07.335033  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:07.347830  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:07.532895  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:07.532970  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:07.619928  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:07.834896  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:08.034425  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:08.034533  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:08.119974  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:08.334778  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:08.533400  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:08.533669  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:08.619700  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:08.834553  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:09.032807  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:09.033119  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:09.119672  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:09.334660  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:09.532132  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:09.532198  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:09.619532  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:09.834318  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:09.847826  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:10.037838  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:10.038507  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:10.119435  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:10.153516  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:58:10.335120  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:10.534505  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:10.534682  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:10.619489  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:10.834700  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:10.967512  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:10.967544  260360 retry.go:31] will retry after 6.543256703s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:11.032449  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:11.032849  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:11.119749  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:11.334849  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:11.532396  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:11.532654  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:11.619320  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:11.834213  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:12.032899  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:12.032979  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:12.119788  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:12.335348  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:12.346908  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:12.533150  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:12.533580  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:12.619315  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:12.834280  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:13.032251  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:13.032489  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:13.120335  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:13.334151  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:13.532667  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:13.532969  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:13.619719  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:13.834565  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:14.033201  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:14.033501  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:14.120123  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:14.334783  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:14.347445  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:14.533095  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:14.533139  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:14.619797  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:14.834757  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:15.033954  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:15.034017  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:15.119611  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:15.334523  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:15.532875  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:15.533202  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:15.620082  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:15.835395  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:16.033055  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:16.033708  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:16.119276  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:16.334163  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:16.533384  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:16.533571  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:16.620004  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:16.834856  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:16.847580  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:17.032905  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:17.032986  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:17.119607  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:17.335517  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:17.511553  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:58:17.534161  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:17.534938  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:17.620143  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:17.834680  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:18.034280  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:18.034852  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:18.119713  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:58:18.319519  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:18.319550  260360 retry.go:31] will retry after 5.014946963s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:18.334614  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:18.532468  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:18.532611  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:18.619358  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:18.834618  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:19.032763  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:19.032926  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:19.119618  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:19.334581  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:19.347264  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:19.532247  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:19.532574  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:19.619248  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:19.834240  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:20.032945  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:20.033406  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:20.120183  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:20.334128  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:20.532993  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:20.533505  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:20.619252  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:20.834341  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:21.032710  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:21.032874  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:21.125286  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:21.334428  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:21.532372  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:21.532513  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:21.619323  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:21.834732  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:21.847477  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:22.032858  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:22.033039  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:22.120415  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:22.334523  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:22.532157  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:22.532303  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:22.620159  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:22.834257  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:23.032535  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:23.032931  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:23.119598  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:23.334777  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:58:23.334900  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:23.534064  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:23.534168  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:23.620434  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:23.834689  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:23.847732  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:24.033397  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:24.034241  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:24.138262  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:58:24.206124  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:24.206151  260360 retry.go:31] will retry after 21.566932522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
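Each apply retry above fails with the same client-side validation error: kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because its top-level apiVersion and kind fields are not set. The manifest itself is not captured in this log; purely as a hedged point of reference, a minimal CustomResourceDefinition that would pass this validation looks like the sketch below (the group, resource names, and schema are hypothetical placeholders, not the actual Inspektor Gadget CRD):

	# Illustrative sketch only; not the contents of ig-crd.yaml.
	# It shows the top-level apiVersion/kind fields whose absence triggers
	# "error validating data: [apiVersion not set, kind not set]".
	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: examples.gadget.example.io      # hypothetical resource name
	spec:
	  group: gadget.example.io              # hypothetical API group
	  scope: Namespaced
	  names:
	    plural: examples
	    singular: example
	    kind: Example
	  versions:
	    - name: v1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object

Suppressing the check with --validate=false, as the error message suggests, would only mask the missing fields rather than correct the manifest; the retries continue in the log below.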
	I1017 18:58:24.364465  260360 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1017 18:58:24.364556  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:24.409298  260360 node_ready.go:49] node "addons-379549" is "Ready"
	I1017 18:58:24.409375  260360 node_ready.go:38] duration metric: took 39.56510904s for node "addons-379549" to be "Ready" ...
	I1017 18:58:24.409415  260360 api_server.go:52] waiting for apiserver process to appear ...
	I1017 18:58:24.409516  260360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 18:58:24.430912  260360 api_server.go:72] duration metric: took 41.793620969s to wait for apiserver process to appear ...
	I1017 18:58:24.430984  260360 api_server.go:88] waiting for apiserver healthz status ...
	I1017 18:58:24.431018  260360 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 18:58:24.439764  260360 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 18:58:24.448797  260360 api_server.go:141] control plane version: v1.34.1
	I1017 18:58:24.448836  260360 api_server.go:131] duration metric: took 17.825154ms to wait for apiserver health ...
	I1017 18:58:24.448846  260360 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 18:58:24.486646  260360 system_pods.go:59] 19 kube-system pods found
	I1017 18:58:24.486766  260360 system_pods.go:61] "coredns-66bc5c9577-cdn2p" [1f00660c-1ffb-43d1-9696-f2d467c8d695] Pending
	I1017 18:58:24.486809  260360 system_pods.go:61] "csi-hostpath-attacher-0" [f9f7eaeb-2121-444d-a3a1-a63c14345e11] Pending
	I1017 18:58:24.486836  260360 system_pods.go:61] "csi-hostpath-resizer-0" [55e67c03-83b5-4067-ad75-6989391f3bc7] Pending
	I1017 18:58:24.486858  260360 system_pods.go:61] "csi-hostpathplugin-dnj9h" [21c0c3df-9209-4bc9-97b5-6df190d961ac] Pending
	I1017 18:58:24.486890  260360 system_pods.go:61] "etcd-addons-379549" [7f7f777a-ca00-4fb0-a88d-83320ec99ef4] Running
	I1017 18:58:24.486915  260360 system_pods.go:61] "kindnet-2gclq" [5af0053d-cab8-47ce-992f-5f170221eb75] Running
	I1017 18:58:24.486942  260360 system_pods.go:61] "kube-apiserver-addons-379549" [2a84a283-09ca-4044-88f4-5bab2d437a1c] Running
	I1017 18:58:24.486979  260360 system_pods.go:61] "kube-controller-manager-addons-379549" [a942dd2b-1f45-4f12-a9da-9c44240aeb3b] Running
	I1017 18:58:24.487012  260360 system_pods.go:61] "kube-ingress-dns-minikube" [a5bc83dd-0e62-49bd-bd0f-ced72e1e81d3] Pending
	I1017 18:58:24.487033  260360 system_pods.go:61] "kube-proxy-9fnkd" [a408204b-db68-48f1-bd0b-fdc7a107dd53] Running
	I1017 18:58:24.487069  260360 system_pods.go:61] "kube-scheduler-addons-379549" [0d4dd7af-36a4-4d02-8185-240b7866dc35] Running
	I1017 18:58:24.487105  260360 system_pods.go:61] "metrics-server-85b7d694d7-kx9vs" [3f92a023-86a2-48df-b062-25036c73dd56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:58:24.487127  260360 system_pods.go:61] "nvidia-device-plugin-daemonset-5tz6p" [379ab14e-3f5a-4e60-a28a-563f7f5de7af] Pending
	I1017 18:58:24.487167  260360 system_pods.go:61] "registry-6b586f9694-lggv9" [27b5c261-0db7-4e88-84bf-fe4b05cf5968] Pending
	I1017 18:58:24.487187  260360 system_pods.go:61] "registry-creds-764b6fb674-v5s46" [26e0457e-0841-4658-b957-473746bb21d1] Pending
	I1017 18:58:24.487209  260360 system_pods.go:61] "registry-proxy-q985d" [2a95f94d-0609-4773-8345-e3789378c865] Pending
	I1017 18:58:24.487250  260360 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8j5lv" [c500fc45-7077-4fec-ba79-fbad181c1d02] Pending
	I1017 18:58:24.487273  260360 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ctqmz" [b812c0ac-9f8f-409b-a8e0-f050f510849d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:24.487313  260360 system_pods.go:61] "storage-provisioner" [a4d946ce-92ed-46d9-a359-bbe460092cbb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:58:24.487339  260360 system_pods.go:74] duration metric: took 38.485342ms to wait for pod list to return data ...
	I1017 18:58:24.487369  260360 default_sa.go:34] waiting for default service account to be created ...
	I1017 18:58:24.565520  260360 default_sa.go:45] found service account: "default"
	I1017 18:58:24.565543  260360 default_sa.go:55] duration metric: took 78.143447ms for default service account to be created ...
	I1017 18:58:24.565553  260360 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 18:58:24.584678  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:24.584781  260360 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1017 18:58:24.584789  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:24.655731  260360 system_pods.go:86] 19 kube-system pods found
	I1017 18:58:24.655809  260360 system_pods.go:89] "coredns-66bc5c9577-cdn2p" [1f00660c-1ffb-43d1-9696-f2d467c8d695] Pending
	I1017 18:58:24.655834  260360 system_pods.go:89] "csi-hostpath-attacher-0" [f9f7eaeb-2121-444d-a3a1-a63c14345e11] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 18:58:24.655857  260360 system_pods.go:89] "csi-hostpath-resizer-0" [55e67c03-83b5-4067-ad75-6989391f3bc7] Pending
	I1017 18:58:24.655891  260360 system_pods.go:89] "csi-hostpathplugin-dnj9h" [21c0c3df-9209-4bc9-97b5-6df190d961ac] Pending
	I1017 18:58:24.655915  260360 system_pods.go:89] "etcd-addons-379549" [7f7f777a-ca00-4fb0-a88d-83320ec99ef4] Running
	I1017 18:58:24.655936  260360 system_pods.go:89] "kindnet-2gclq" [5af0053d-cab8-47ce-992f-5f170221eb75] Running
	I1017 18:58:24.655971  260360 system_pods.go:89] "kube-apiserver-addons-379549" [2a84a283-09ca-4044-88f4-5bab2d437a1c] Running
	I1017 18:58:24.655995  260360 system_pods.go:89] "kube-controller-manager-addons-379549" [a942dd2b-1f45-4f12-a9da-9c44240aeb3b] Running
	I1017 18:58:24.656014  260360 system_pods.go:89] "kube-ingress-dns-minikube" [a5bc83dd-0e62-49bd-bd0f-ced72e1e81d3] Pending
	I1017 18:58:24.656047  260360 system_pods.go:89] "kube-proxy-9fnkd" [a408204b-db68-48f1-bd0b-fdc7a107dd53] Running
	I1017 18:58:24.656070  260360 system_pods.go:89] "kube-scheduler-addons-379549" [0d4dd7af-36a4-4d02-8185-240b7866dc35] Running
	I1017 18:58:24.656091  260360 system_pods.go:89] "metrics-server-85b7d694d7-kx9vs" [3f92a023-86a2-48df-b062-25036c73dd56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:58:24.656110  260360 system_pods.go:89] "nvidia-device-plugin-daemonset-5tz6p" [379ab14e-3f5a-4e60-a28a-563f7f5de7af] Pending
	I1017 18:58:24.656147  260360 system_pods.go:89] "registry-6b586f9694-lggv9" [27b5c261-0db7-4e88-84bf-fe4b05cf5968] Pending
	I1017 18:58:24.656164  260360 system_pods.go:89] "registry-creds-764b6fb674-v5s46" [26e0457e-0841-4658-b957-473746bb21d1] Pending
	I1017 18:58:24.656184  260360 system_pods.go:89] "registry-proxy-q985d" [2a95f94d-0609-4773-8345-e3789378c865] Pending
	I1017 18:58:24.656214  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8j5lv" [c500fc45-7077-4fec-ba79-fbad181c1d02] Pending
	I1017 18:58:24.656242  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ctqmz" [b812c0ac-9f8f-409b-a8e0-f050f510849d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:24.656266  260360 system_pods.go:89] "storage-provisioner" [a4d946ce-92ed-46d9-a359-bbe460092cbb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:58:24.656311  260360 retry.go:31] will retry after 256.846359ms: missing components: kube-dns
	I1017 18:58:24.667537  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:24.840424  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:24.923607  260360 system_pods.go:86] 19 kube-system pods found
	I1017 18:58:24.923695  260360 system_pods.go:89] "coredns-66bc5c9577-cdn2p" [1f00660c-1ffb-43d1-9696-f2d467c8d695] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 18:58:24.923721  260360 system_pods.go:89] "csi-hostpath-attacher-0" [f9f7eaeb-2121-444d-a3a1-a63c14345e11] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 18:58:24.923760  260360 system_pods.go:89] "csi-hostpath-resizer-0" [55e67c03-83b5-4067-ad75-6989391f3bc7] Pending
	I1017 18:58:24.923783  260360 system_pods.go:89] "csi-hostpathplugin-dnj9h" [21c0c3df-9209-4bc9-97b5-6df190d961ac] Pending
	I1017 18:58:24.923801  260360 system_pods.go:89] "etcd-addons-379549" [7f7f777a-ca00-4fb0-a88d-83320ec99ef4] Running
	I1017 18:58:24.923823  260360 system_pods.go:89] "kindnet-2gclq" [5af0053d-cab8-47ce-992f-5f170221eb75] Running
	I1017 18:58:24.923856  260360 system_pods.go:89] "kube-apiserver-addons-379549" [2a84a283-09ca-4044-88f4-5bab2d437a1c] Running
	I1017 18:58:24.923880  260360 system_pods.go:89] "kube-controller-manager-addons-379549" [a942dd2b-1f45-4f12-a9da-9c44240aeb3b] Running
	I1017 18:58:24.923901  260360 system_pods.go:89] "kube-ingress-dns-minikube" [a5bc83dd-0e62-49bd-bd0f-ced72e1e81d3] Pending
	I1017 18:58:24.923942  260360 system_pods.go:89] "kube-proxy-9fnkd" [a408204b-db68-48f1-bd0b-fdc7a107dd53] Running
	I1017 18:58:24.923966  260360 system_pods.go:89] "kube-scheduler-addons-379549" [0d4dd7af-36a4-4d02-8185-240b7866dc35] Running
	I1017 18:58:24.923996  260360 system_pods.go:89] "metrics-server-85b7d694d7-kx9vs" [3f92a023-86a2-48df-b062-25036c73dd56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:58:24.924031  260360 system_pods.go:89] "nvidia-device-plugin-daemonset-5tz6p" [379ab14e-3f5a-4e60-a28a-563f7f5de7af] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 18:58:24.924056  260360 system_pods.go:89] "registry-6b586f9694-lggv9" [27b5c261-0db7-4e88-84bf-fe4b05cf5968] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:58:24.924079  260360 system_pods.go:89] "registry-creds-764b6fb674-v5s46" [26e0457e-0841-4658-b957-473746bb21d1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:58:24.924117  260360 system_pods.go:89] "registry-proxy-q985d" [2a95f94d-0609-4773-8345-e3789378c865] Pending
	I1017 18:58:24.924145  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8j5lv" [c500fc45-7077-4fec-ba79-fbad181c1d02] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:24.924168  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ctqmz" [b812c0ac-9f8f-409b-a8e0-f050f510849d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:24.924206  260360 system_pods.go:89] "storage-provisioner" [a4d946ce-92ed-46d9-a359-bbe460092cbb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:58:24.924243  260360 retry.go:31] will retry after 287.083262ms: missing components: kube-dns
	I1017 18:58:25.041469  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:25.048448  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:25.121138  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:25.218607  260360 system_pods.go:86] 19 kube-system pods found
	I1017 18:58:25.218641  260360 system_pods.go:89] "coredns-66bc5c9577-cdn2p" [1f00660c-1ffb-43d1-9696-f2d467c8d695] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 18:58:25.218650  260360 system_pods.go:89] "csi-hostpath-attacher-0" [f9f7eaeb-2121-444d-a3a1-a63c14345e11] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 18:58:25.218658  260360 system_pods.go:89] "csi-hostpath-resizer-0" [55e67c03-83b5-4067-ad75-6989391f3bc7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 18:58:25.218665  260360 system_pods.go:89] "csi-hostpathplugin-dnj9h" [21c0c3df-9209-4bc9-97b5-6df190d961ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 18:58:25.218677  260360 system_pods.go:89] "etcd-addons-379549" [7f7f777a-ca00-4fb0-a88d-83320ec99ef4] Running
	I1017 18:58:25.218682  260360 system_pods.go:89] "kindnet-2gclq" [5af0053d-cab8-47ce-992f-5f170221eb75] Running
	I1017 18:58:25.218696  260360 system_pods.go:89] "kube-apiserver-addons-379549" [2a84a283-09ca-4044-88f4-5bab2d437a1c] Running
	I1017 18:58:25.218701  260360 system_pods.go:89] "kube-controller-manager-addons-379549" [a942dd2b-1f45-4f12-a9da-9c44240aeb3b] Running
	I1017 18:58:25.218708  260360 system_pods.go:89] "kube-ingress-dns-minikube" [a5bc83dd-0e62-49bd-bd0f-ced72e1e81d3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 18:58:25.218715  260360 system_pods.go:89] "kube-proxy-9fnkd" [a408204b-db68-48f1-bd0b-fdc7a107dd53] Running
	I1017 18:58:25.218720  260360 system_pods.go:89] "kube-scheduler-addons-379549" [0d4dd7af-36a4-4d02-8185-240b7866dc35] Running
	I1017 18:58:25.218729  260360 system_pods.go:89] "metrics-server-85b7d694d7-kx9vs" [3f92a023-86a2-48df-b062-25036c73dd56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:58:25.218739  260360 system_pods.go:89] "nvidia-device-plugin-daemonset-5tz6p" [379ab14e-3f5a-4e60-a28a-563f7f5de7af] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 18:58:25.218747  260360 system_pods.go:89] "registry-6b586f9694-lggv9" [27b5c261-0db7-4e88-84bf-fe4b05cf5968] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:58:25.218753  260360 system_pods.go:89] "registry-creds-764b6fb674-v5s46" [26e0457e-0841-4658-b957-473746bb21d1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:58:25.218759  260360 system_pods.go:89] "registry-proxy-q985d" [2a95f94d-0609-4773-8345-e3789378c865] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 18:58:25.218765  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8j5lv" [c500fc45-7077-4fec-ba79-fbad181c1d02] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:25.218772  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ctqmz" [b812c0ac-9f8f-409b-a8e0-f050f510849d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:25.218778  260360 system_pods.go:89] "storage-provisioner" [a4d946ce-92ed-46d9-a359-bbe460092cbb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:58:25.218792  260360 retry.go:31] will retry after 366.873436ms: missing components: kube-dns
	I1017 18:58:25.335677  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:25.535050  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:25.535376  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:25.590424  260360 system_pods.go:86] 19 kube-system pods found
	I1017 18:58:25.590466  260360 system_pods.go:89] "coredns-66bc5c9577-cdn2p" [1f00660c-1ffb-43d1-9696-f2d467c8d695] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 18:58:25.590476  260360 system_pods.go:89] "csi-hostpath-attacher-0" [f9f7eaeb-2121-444d-a3a1-a63c14345e11] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 18:58:25.590483  260360 system_pods.go:89] "csi-hostpath-resizer-0" [55e67c03-83b5-4067-ad75-6989391f3bc7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 18:58:25.590491  260360 system_pods.go:89] "csi-hostpathplugin-dnj9h" [21c0c3df-9209-4bc9-97b5-6df190d961ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 18:58:25.590496  260360 system_pods.go:89] "etcd-addons-379549" [7f7f777a-ca00-4fb0-a88d-83320ec99ef4] Running
	I1017 18:58:25.590502  260360 system_pods.go:89] "kindnet-2gclq" [5af0053d-cab8-47ce-992f-5f170221eb75] Running
	I1017 18:58:25.590511  260360 system_pods.go:89] "kube-apiserver-addons-379549" [2a84a283-09ca-4044-88f4-5bab2d437a1c] Running
	I1017 18:58:25.590516  260360 system_pods.go:89] "kube-controller-manager-addons-379549" [a942dd2b-1f45-4f12-a9da-9c44240aeb3b] Running
	I1017 18:58:25.590525  260360 system_pods.go:89] "kube-ingress-dns-minikube" [a5bc83dd-0e62-49bd-bd0f-ced72e1e81d3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 18:58:25.590529  260360 system_pods.go:89] "kube-proxy-9fnkd" [a408204b-db68-48f1-bd0b-fdc7a107dd53] Running
	I1017 18:58:25.590534  260360 system_pods.go:89] "kube-scheduler-addons-379549" [0d4dd7af-36a4-4d02-8185-240b7866dc35] Running
	I1017 18:58:25.590547  260360 system_pods.go:89] "metrics-server-85b7d694d7-kx9vs" [3f92a023-86a2-48df-b062-25036c73dd56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:58:25.590554  260360 system_pods.go:89] "nvidia-device-plugin-daemonset-5tz6p" [379ab14e-3f5a-4e60-a28a-563f7f5de7af] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 18:58:25.590566  260360 system_pods.go:89] "registry-6b586f9694-lggv9" [27b5c261-0db7-4e88-84bf-fe4b05cf5968] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:58:25.590576  260360 system_pods.go:89] "registry-creds-764b6fb674-v5s46" [26e0457e-0841-4658-b957-473746bb21d1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:58:25.590591  260360 system_pods.go:89] "registry-proxy-q985d" [2a95f94d-0609-4773-8345-e3789378c865] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 18:58:25.590598  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8j5lv" [c500fc45-7077-4fec-ba79-fbad181c1d02] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:25.590608  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ctqmz" [b812c0ac-9f8f-409b-a8e0-f050f510849d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:25.590616  260360 system_pods.go:89] "storage-provisioner" [a4d946ce-92ed-46d9-a359-bbe460092cbb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:58:25.590631  260360 retry.go:31] will retry after 450.765843ms: missing components: kube-dns
	I1017 18:58:25.619533  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:25.864117  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:26.034379  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:26.035377  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:26.046914  260360 system_pods.go:86] 19 kube-system pods found
	I1017 18:58:26.046950  260360 system_pods.go:89] "coredns-66bc5c9577-cdn2p" [1f00660c-1ffb-43d1-9696-f2d467c8d695] Running
	I1017 18:58:26.046961  260360 system_pods.go:89] "csi-hostpath-attacher-0" [f9f7eaeb-2121-444d-a3a1-a63c14345e11] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 18:58:26.046969  260360 system_pods.go:89] "csi-hostpath-resizer-0" [55e67c03-83b5-4067-ad75-6989391f3bc7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 18:58:26.046978  260360 system_pods.go:89] "csi-hostpathplugin-dnj9h" [21c0c3df-9209-4bc9-97b5-6df190d961ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 18:58:26.046985  260360 system_pods.go:89] "etcd-addons-379549" [7f7f777a-ca00-4fb0-a88d-83320ec99ef4] Running
	I1017 18:58:26.046990  260360 system_pods.go:89] "kindnet-2gclq" [5af0053d-cab8-47ce-992f-5f170221eb75] Running
	I1017 18:58:26.046995  260360 system_pods.go:89] "kube-apiserver-addons-379549" [2a84a283-09ca-4044-88f4-5bab2d437a1c] Running
	I1017 18:58:26.047000  260360 system_pods.go:89] "kube-controller-manager-addons-379549" [a942dd2b-1f45-4f12-a9da-9c44240aeb3b] Running
	I1017 18:58:26.047006  260360 system_pods.go:89] "kube-ingress-dns-minikube" [a5bc83dd-0e62-49bd-bd0f-ced72e1e81d3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 18:58:26.047011  260360 system_pods.go:89] "kube-proxy-9fnkd" [a408204b-db68-48f1-bd0b-fdc7a107dd53] Running
	I1017 18:58:26.047021  260360 system_pods.go:89] "kube-scheduler-addons-379549" [0d4dd7af-36a4-4d02-8185-240b7866dc35] Running
	I1017 18:58:26.047028  260360 system_pods.go:89] "metrics-server-85b7d694d7-kx9vs" [3f92a023-86a2-48df-b062-25036c73dd56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:58:26.047038  260360 system_pods.go:89] "nvidia-device-plugin-daemonset-5tz6p" [379ab14e-3f5a-4e60-a28a-563f7f5de7af] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 18:58:26.047045  260360 system_pods.go:89] "registry-6b586f9694-lggv9" [27b5c261-0db7-4e88-84bf-fe4b05cf5968] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:58:26.047052  260360 system_pods.go:89] "registry-creds-764b6fb674-v5s46" [26e0457e-0841-4658-b957-473746bb21d1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:58:26.047065  260360 system_pods.go:89] "registry-proxy-q985d" [2a95f94d-0609-4773-8345-e3789378c865] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 18:58:26.047072  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8j5lv" [c500fc45-7077-4fec-ba79-fbad181c1d02] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:26.047083  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ctqmz" [b812c0ac-9f8f-409b-a8e0-f050f510849d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:26.047088  260360 system_pods.go:89] "storage-provisioner" [a4d946ce-92ed-46d9-a359-bbe460092cbb] Running
	I1017 18:58:26.047099  260360 system_pods.go:126] duration metric: took 1.481540846s to wait for k8s-apps to be running ...
	I1017 18:58:26.047115  260360 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 18:58:26.047170  260360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 18:58:26.063784  260360 system_svc.go:56] duration metric: took 16.644894ms WaitForService to wait for kubelet
	I1017 18:58:26.063867  260360 kubeadm.go:586] duration metric: took 43.426580127s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 18:58:26.063902  260360 node_conditions.go:102] verifying NodePressure condition ...
	I1017 18:58:26.067012  260360 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 18:58:26.067046  260360 node_conditions.go:123] node cpu capacity is 2
	I1017 18:58:26.067060  260360 node_conditions.go:105] duration metric: took 3.125218ms to run NodePressure ...
	I1017 18:58:26.067075  260360 start.go:241] waiting for startup goroutines ...
	I1017 18:58:26.135031  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:26.334252  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:26.533539  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:26.533825  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:26.620115  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:26.848210  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:27.033484  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:27.033712  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:27.119336  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:27.334885  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:27.535178  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:27.535681  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:27.619810  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:27.840660  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:28.033767  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:28.034388  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:28.134334  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:28.335707  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:28.534265  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:28.534641  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:28.619925  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:28.849796  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:29.037617  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:29.038573  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:29.137926  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:29.336505  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:29.534567  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:29.535021  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:29.620048  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:29.835689  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:30.039405  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:30.039852  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:30.120077  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:30.337317  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:30.536182  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:30.536545  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:30.619327  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:30.834993  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:31.033813  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:31.034060  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:31.133809  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:31.339835  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:31.538379  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:31.538716  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:31.619766  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:31.835507  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:32.033967  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:32.034090  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:32.120294  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:32.334983  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:32.532961  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:32.533130  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:32.620838  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:32.835384  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:33.033897  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:33.034489  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:33.119374  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:33.334816  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:33.533914  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:33.534087  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:33.620178  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:33.834217  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:34.033893  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:34.034279  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:34.119883  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:34.335525  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:34.532605  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:34.533181  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:34.620048  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:34.834261  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:35.033300  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:35.033523  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:35.119416  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:35.335967  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:35.534336  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:35.534990  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:35.620399  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:35.834967  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:36.034609  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:36.034931  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:36.120594  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:36.335121  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:36.534093  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:36.534309  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:36.620314  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:36.835600  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:37.035582  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:37.035863  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:37.119972  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:37.335368  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:37.534418  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:37.534718  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:37.619926  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:37.835961  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:38.034339  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:38.035020  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:38.120018  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:38.335912  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:38.532649  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:38.533898  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:38.620305  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:38.835669  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:39.034249  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:39.034522  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:39.134227  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:39.335565  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:39.533451  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:39.533741  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:39.620427  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:39.835237  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:40.056051  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:40.056588  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:40.149214  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:40.336085  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:40.534204  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:40.534663  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:40.619928  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:40.835824  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:41.034289  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:41.034769  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:41.119679  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:41.335892  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:41.534924  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:41.535334  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:41.620123  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:41.834172  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:42.035699  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:42.037403  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:42.119897  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:42.335507  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:42.534568  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:42.538824  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:42.639081  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:42.835244  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:43.033471  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:43.033622  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:43.119313  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:43.334694  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:43.534439  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:43.534556  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:43.619416  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:43.834248  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:44.035932  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:44.036479  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:44.119513  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:44.335163  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:44.534009  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:44.534438  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:44.619321  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:44.834884  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:45.047934  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:45.047953  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:45.122529  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:45.337881  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:45.535501  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:45.536015  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:45.619997  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:45.773288  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:58:45.834887  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:46.033628  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:46.033804  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:46.119888  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:46.335799  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:46.533988  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:46.534255  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:46.619552  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:46.835244  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:46.897801  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.124464489s)
	W1017 18:58:46.897839  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:46.897865  260360 retry.go:31] will retry after 17.010967715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:47.035030  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:47.035298  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:47.119880  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:47.337610  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:47.534533  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:47.535042  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:47.633891  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:47.835286  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:48.034126  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:48.034295  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:48.120425  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:48.334615  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:48.533864  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:48.534274  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:48.621469  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:48.835356  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:49.036735  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:49.036973  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:49.120299  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:49.335200  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:49.536721  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:49.537361  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:49.620233  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:49.835958  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:50.034494  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:50.034714  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:50.119438  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:50.334783  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:50.533674  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:50.534129  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:50.620309  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:50.835036  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:51.034365  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:51.034790  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:51.120236  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:51.334943  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:51.538332  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:51.546203  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:51.621959  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:51.835528  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:52.033706  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:52.034500  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:52.119220  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:52.334327  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:52.534132  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:52.534341  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:52.634091  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:52.835661  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:53.043729  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:53.044129  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:53.141082  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:53.335711  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:53.534498  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:53.534927  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:53.620106  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:53.834682  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:54.034604  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:54.034863  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:54.134888  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:54.335098  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:54.533978  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:54.534934  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:54.619632  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:54.834647  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:55.033545  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:55.034210  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:55.119936  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:55.335283  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:55.533563  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:55.534755  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:55.620201  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:55.835361  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:56.037987  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:56.038712  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:56.119758  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:56.335967  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:56.533773  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:56.534591  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:56.619763  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:56.836299  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:57.034884  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:57.035450  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:57.119601  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:57.337730  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:57.536699  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:57.537131  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:57.620775  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:57.835719  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:58.033314  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:58.033933  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:58.120495  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:58.335432  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:58.533106  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:58.534061  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:58.619636  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:58.835244  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:59.035221  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:59.035629  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:59.120185  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:59.335790  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:59.533292  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:59.534460  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:59.634054  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:59.837281  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:00.048795  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:00.048908  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:00.120540  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:00.336246  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:00.534552  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:00.535152  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:00.621428  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:00.837307  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:01.034610  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:01.034752  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:01.119530  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:01.335616  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:01.533999  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:01.534124  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:01.624980  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:01.835837  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:02.035175  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:02.035418  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:02.120189  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:02.335135  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:02.534385  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:02.534817  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:02.619774  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:02.835034  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:03.033265  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:03.033409  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:03.119310  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:03.334901  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:03.533796  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:03.534174  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:03.620775  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:03.836058  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:03.909332  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:59:04.034972  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:04.035388  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:04.120645  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:04.335912  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:04.534354  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:04.534967  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:04.619523  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:04.835788  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:05.037159  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:05.037536  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:05.052088  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.14271416s)
	W1017 18:59:05.052128  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1017 18:59:05.052206  260360 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
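	[Note on the failure above: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest (or one of its YAML documents) carries neither an apiVersion nor a kind field; that usually points to an empty or truncated document rather than a syntax error in the CRD body, which is why the other gadget resources in the same apply still succeed. For illustration only (the real ig-crd.yaml contents are not shown in this log, so the group and resource name below are hypothetical), a CRD document that passes this particular check begins with:
	
	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: examples.mygroup.example.com   # hypothetical <plural>.<group> name, for illustration only
	
	Passing --validate=false, as the error text suggests, would only suppress the check rather than install a usable CRD from an empty document, which is consistent with the retry at 18:59:03 failing with the same error before the addon is reported as failed at 18:59:05.]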
	I1017 18:59:05.120152  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:05.334580  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:05.534835  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:05.534979  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:05.619560  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:05.835283  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:06.034655  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:06.035115  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:06.120334  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:06.335159  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:06.534383  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:06.534904  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:06.620114  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:06.835777  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:07.034234  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:07.034545  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:07.119599  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:07.335234  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:07.532969  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:07.533659  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:07.619385  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:07.835348  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:08.036905  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:08.037168  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:08.123838  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:08.336431  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:08.535624  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:08.536080  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:08.620782  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:08.836043  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:09.034594  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:09.035107  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:09.128814  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:09.341301  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:09.533519  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:09.533852  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:09.619598  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:09.834933  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:10.034996  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:10.035409  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:10.120558  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:10.334995  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:10.533086  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:10.533448  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:10.619501  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:10.835031  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:11.032777  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:11.032832  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:11.124720  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:11.335357  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:11.533677  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:11.533836  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:11.619637  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:11.838891  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:12.037250  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:12.037440  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:12.119268  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:12.335860  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:12.533926  260360 kapi.go:107] duration metric: took 1m23.504796959s to wait for kubernetes.io/minikube-addons=registry ...
	I1017 18:59:12.534253  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:12.619930  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:12.835949  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:13.034643  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:13.119767  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:13.338319  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:13.532872  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:13.619824  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:13.836277  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:14.032735  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:14.121718  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:14.335910  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:14.533340  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:14.620134  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:14.835468  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:15.034396  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:15.120386  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:15.335373  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:15.532830  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:15.620527  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:15.835836  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:16.033272  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:16.120195  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:16.334395  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:16.535760  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:16.637762  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:16.835792  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:17.033180  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:17.120208  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:17.335361  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:17.532493  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:17.619971  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:17.835713  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:18.032991  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:18.119881  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:18.335726  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:18.533023  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:18.620140  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:18.838004  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:19.033403  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:19.133142  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:19.334342  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:19.532661  260360 kapi.go:107] duration metric: took 1m30.503478233s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1017 18:59:19.619702  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:19.836043  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:20.119995  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:20.425623  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:20.619788  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:20.836360  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:21.120884  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:21.335752  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:21.626657  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:21.836266  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:22.119509  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:22.335001  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:22.619943  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:22.836151  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:23.121698  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:23.338646  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:23.619286  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:23.835349  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:24.121448  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:24.336454  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:24.620325  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:24.835783  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:25.120369  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:25.335352  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:25.619886  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:25.841843  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:26.120383  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:26.335022  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:26.622480  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:26.835340  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:27.119858  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:27.335601  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:27.619533  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:27.837109  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:28.121035  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:28.339181  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:28.620905  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:28.835084  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:29.119771  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:29.335255  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:29.620392  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:29.834830  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:30.119872  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:30.335405  260360 kapi.go:107] duration metric: took 1m41.004120887s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1017 18:59:30.620277  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:31.120820  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:31.621008  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:32.119854  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:32.620613  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:33.120134  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:33.620117  260360 kapi.go:107] duration metric: took 1m41.00366236s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1017 18:59:33.637668  260360 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-379549 cluster.
	I1017 18:59:33.652134  260360 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1017 18:59:33.662273  260360 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1017 18:59:33.669721  260360 out.go:179] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, registry-creds, amd-gpu-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1017 18:59:33.671195  260360 addons.go:514] duration metric: took 1m51.033396982s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns registry-creds amd-gpu-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1017 18:59:33.671251  260360 start.go:246] waiting for cluster config update ...
	I1017 18:59:33.671271  260360 start.go:255] writing updated cluster config ...
	I1017 18:59:33.671570  260360 ssh_runner.go:195] Run: rm -f paused
	I1017 18:59:33.675968  260360 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 18:59:33.679424  260360 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cdn2p" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:33.684198  260360 pod_ready.go:94] pod "coredns-66bc5c9577-cdn2p" is "Ready"
	I1017 18:59:33.684227  260360 pod_ready.go:86] duration metric: took 4.779107ms for pod "coredns-66bc5c9577-cdn2p" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:33.686802  260360 pod_ready.go:83] waiting for pod "etcd-addons-379549" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:33.691629  260360 pod_ready.go:94] pod "etcd-addons-379549" is "Ready"
	I1017 18:59:33.691657  260360 pod_ready.go:86] duration metric: took 4.827213ms for pod "etcd-addons-379549" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:33.694355  260360 pod_ready.go:83] waiting for pod "kube-apiserver-addons-379549" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:33.699110  260360 pod_ready.go:94] pod "kube-apiserver-addons-379549" is "Ready"
	I1017 18:59:33.699143  260360 pod_ready.go:86] duration metric: took 4.761639ms for pod "kube-apiserver-addons-379549" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:33.701516  260360 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-379549" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:34.080314  260360 pod_ready.go:94] pod "kube-controller-manager-addons-379549" is "Ready"
	I1017 18:59:34.080343  260360 pod_ready.go:86] duration metric: took 378.800183ms for pod "kube-controller-manager-addons-379549" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:34.280710  260360 pod_ready.go:83] waiting for pod "kube-proxy-9fnkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:34.680139  260360 pod_ready.go:94] pod "kube-proxy-9fnkd" is "Ready"
	I1017 18:59:34.680164  260360 pod_ready.go:86] duration metric: took 399.422879ms for pod "kube-proxy-9fnkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:34.880504  260360 pod_ready.go:83] waiting for pod "kube-scheduler-addons-379549" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:35.279822  260360 pod_ready.go:94] pod "kube-scheduler-addons-379549" is "Ready"
	I1017 18:59:35.279855  260360 pod_ready.go:86] duration metric: took 399.256483ms for pod "kube-scheduler-addons-379549" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:35.279869  260360 pod_ready.go:40] duration metric: took 1.603866957s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 18:59:35.347507  260360 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 18:59:35.348983  260360 out.go:179] * Done! kubectl is now configured to use "addons-379549" cluster and "default" namespace by default
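The gcp-auth messages in the log above describe the addon's opt-out mechanism: once enabled, a mutating webhook mounts GCP credentials into every newly created pod, a pod can opt out by carrying the `gcp-auth-skip-secret` label key, and pods that already exist only pick up credentials after being recreated (or after rerunning the addon with --refresh). A minimal, illustrative way to create an opted-out pod against this cluster (the pod name, image, and label value are placeholders, not part of the test run; per the message above only the label key is required):

	kubectl run opt-out-demo --image=nginx --labels=gcp-auth-skip-secret=true
	# the label has to be present at creation time: the gcp-auth mutating webhook
	# runs at pod admission, so labelling an already-running pod has no effect
	# until that pod is recreated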
	
	
	==> CRI-O <==
	Oct 17 19:02:37 addons-379549 crio[833]: time="2025-10-17T19:02:37.576081433Z" level=info msg="Removed container 2d00bd8021c27a3586f430f55e3144a87832275fa6014ba4671fc3a59d4303fc: kube-system/registry-creds-764b6fb674-v5s46/registry-creds" id=b5fff322-6531-490b-bc60-f4fa3c22e551 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.020258977Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-x46cm/POD" id=0608baf2-bba0-48f7-aa53-50208db89746 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.020364049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.042426915Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-x46cm Namespace:default ID:a64aaf0247afdaaee03bc46820c454a77ebdc5458146003ed5326d81e570127e UID:f9061eb8-5415-4b80-86bb-73486ec69897 NetNS:/var/run/netns/a49c49b2-3e30-4f7b-8dcc-14c38f1bbb58 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001f10d68}] Aliases:map[]}"
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.042615405Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-x46cm to CNI network \"kindnet\" (type=ptp)"
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.061182137Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-x46cm Namespace:default ID:a64aaf0247afdaaee03bc46820c454a77ebdc5458146003ed5326d81e570127e UID:f9061eb8-5415-4b80-86bb-73486ec69897 NetNS:/var/run/netns/a49c49b2-3e30-4f7b-8dcc-14c38f1bbb58 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001f10d68}] Aliases:map[]}"
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.061361159Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-x46cm for CNI network kindnet (type=ptp)"
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.073021389Z" level=info msg="Ran pod sandbox a64aaf0247afdaaee03bc46820c454a77ebdc5458146003ed5326d81e570127e with infra container: default/hello-world-app-5d498dc89-x46cm/POD" id=0608baf2-bba0-48f7-aa53-50208db89746 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.074486384Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=b0b22dc4-3f8b-4dd8-af71-29c7cf671544 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.074838143Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=b0b22dc4-3f8b-4dd8-af71-29c7cf671544 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.074964309Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=b0b22dc4-3f8b-4dd8-af71-29c7cf671544 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.076778544Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=d1dbbc98-9955-42b8-b317-8d32465bb2eb name=/runtime.v1.ImageService/PullImage
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.078411148Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.731488232Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=d1dbbc98-9955-42b8-b317-8d32465bb2eb name=/runtime.v1.ImageService/PullImage
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.732114895Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d6f98608-34bb-47e5-9054-fae58dc36540 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.736203675Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e6746668-a786-46b1-ba97-a808d772aa68 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.744916191Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-x46cm/hello-world-app" id=d74963a4-1d78-4daa-8284-20d22a0310d6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.746242458Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.759135452Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.759350125Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ecaa61358983dce5a74a503c9b401fd93c298275ecaab43d5650ae9f8c499c27/merged/etc/passwd: no such file or directory"
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.759383494Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ecaa61358983dce5a74a503c9b401fd93c298275ecaab43d5650ae9f8c499c27/merged/etc/group: no such file or directory"
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.759696165Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.779224614Z" level=info msg="Created container fc907c999d9239098ff8404ff6a28d9f5249e1ee5baa3fe88b70e2e891be9b8a: default/hello-world-app-5d498dc89-x46cm/hello-world-app" id=d74963a4-1d78-4daa-8284-20d22a0310d6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.781073605Z" level=info msg="Starting container: fc907c999d9239098ff8404ff6a28d9f5249e1ee5baa3fe88b70e2e891be9b8a" id=5850f026-dd55-43ce-867f-31a2a836a01a name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:02:41 addons-379549 crio[833]: time="2025-10-17T19:02:41.789704614Z" level=info msg="Started container" PID=7249 containerID=fc907c999d9239098ff8404ff6a28d9f5249e1ee5baa3fe88b70e2e891be9b8a description=default/hello-world-app-5d498dc89-x46cm/hello-world-app id=5850f026-dd55-43ce-867f-31a2a836a01a name=/runtime.v1.RuntimeService/StartContainer sandboxID=a64aaf0247afdaaee03bc46820c454a77ebdc5458146003ed5326d81e570127e
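The CRI-O entries above trace the full lifecycle of the hello-world-app pod: RunPodSandbox wires the sandbox into the kindnet CNI network, ImageStatus finds no local copy of docker.io/kicbase/echo-server:1.0, PullImage fetches it by digest, and CreateContainer/StartContainer launch the workload (PID 7249). As a sketch, the resulting container can be inspected on the node with the CRI client; the profile name and the container-ID prefix below are taken from this report:

	minikube ssh -p addons-379549 -- sudo crictl ps --name hello-world-app
	minikube ssh -p addons-379549 -- sudo crictl inspect fc907c999d923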
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	fc907c999d923       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   a64aaf0247afd       hello-world-app-5d498dc89-x46cm             default
	6e4c93523e66e       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             5 seconds ago            Exited              registry-creds                           1                   ca258827d8e9b       registry-creds-764b6fb674-v5s46             kube-system
	9c9815411a727       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago            Running             nginx                                    0                   96a3ee6842e3a       nginx                                       default
	1e748d09ef99c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   9bc5f8b2117c1       busybox                                     default
	33c887c51c977       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   e132ef3e844e7       gcp-auth-78565c9fb4-4z5sp                   gcp-auth
	5cf24bffa8a4a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   2551bf1f3f65e       csi-hostpathplugin-dnj9h                    kube-system
	80799fb75c916       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   2551bf1f3f65e       csi-hostpathplugin-dnj9h                    kube-system
	6fde7d0006c1a       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   2551bf1f3f65e       csi-hostpathplugin-dnj9h                    kube-system
	92b113c7cfe79       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   2551bf1f3f65e       csi-hostpathplugin-dnj9h                    kube-system
	5651bbb1546ea       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   2551bf1f3f65e       csi-hostpathplugin-dnj9h                    kube-system
	68de827b898e9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   a831395674642       gadget-9vfvf                                gadget
	7964e74b18162       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   40a4d10244fcd       ingress-nginx-controller-675c5ddd98-qx9b8   ingress-nginx
	85fd1c198568a       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   99d17967d35b9       local-path-provisioner-648f6765c9-mtqnt     local-path-storage
	b06455475d2b3       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   8b1b6ae624127       registry-proxy-q985d                        kube-system
	ce48b4c920d81       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   df2754c682823       snapshot-controller-7d9fbc56b8-8j5lv        kube-system
	accf4579f8250       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   76cff2858c800       kube-ingress-dns-minikube                   kube-system
	fb1f7d0e065d8       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   2551bf1f3f65e       csi-hostpathplugin-dnj9h                    kube-system
	a9161ab91cb06       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   dcd74e1c65f91       yakd-dashboard-5ff678cb9-pk6pq              yakd-dashboard
	3986728e63c14       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   4b982a81e247a       csi-hostpath-resizer-0                      kube-system
	88eee337e7ec6       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   0aab0381cd387       nvidia-device-plugin-daemonset-5tz6p        kube-system
	287b90d2b10db       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             3 minutes ago            Exited              patch                                    2                   46d75b61673b8       ingress-nginx-admission-patch-5dn9f         ingress-nginx
	8e63327d94af6       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               4 minutes ago            Running             cloud-spanner-emulator                   0                   3afa7f090868b       cloud-spanner-emulator-86bd5cbb97-9vn6g     default
	012db353f99b6       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   7071a6ec8434e       csi-hostpath-attacher-0                     kube-system
	9361ebb005625       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   d015ae841fce6       snapshot-controller-7d9fbc56b8-ctqmz        kube-system
	de5165e5bfa9f       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   c53f2241eb7ca       registry-6b586f9694-lggv9                   kube-system
	5b8f14f3c7ff8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   4 minutes ago            Exited              create                                   0                   5b96bd60502c6       ingress-nginx-admission-create-x76j5        ingress-nginx
	37d41037f4ee9       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   03639054dada5       metrics-server-85b7d694d7-kx9vs             kube-system
	c83ac4cff13e7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   bc5f262ce206c       storage-provisioner                         kube-system
	70437ef145370       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   01efd9fefe6c7       coredns-66bc5c9577-cdn2p                    kube-system
	0c926298efaa6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   c08fd3909fd19       kindnet-2gclq                               kube-system
	ad27f04cf6a14       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   065a4b9c92fcd       kube-proxy-9fnkd                            kube-system
	22a266e5672ab       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   1c1968ed28531       kube-controller-manager-addons-379549       kube-system
	beb0486de70d8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   d89a9b4a4fa6e       etcd-addons-379549                          kube-system
	04fd09957b07c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   427e79f9576b5       kube-scheduler-addons-379549                kube-system
	612fc65e5e866       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   9b93e367ae672       kube-apiserver-addons-379549                kube-system
	
	
	==> coredns [70437ef1453701665ef3d63f7f7a1d3affd361ef34251a1b4b2f6c5615248d1b] <==
	[INFO] 10.244.0.16:54869 - 38920 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002294561s
	[INFO] 10.244.0.16:54869 - 20198 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000233922s
	[INFO] 10.244.0.16:54869 - 61943 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000170867s
	[INFO] 10.244.0.16:57141 - 12248 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00018553s
	[INFO] 10.244.0.16:57141 - 11787 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000248584s
	[INFO] 10.244.0.16:41176 - 19456 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000117921s
	[INFO] 10.244.0.16:41176 - 19259 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090623s
	[INFO] 10.244.0.16:47833 - 39241 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000093536s
	[INFO] 10.244.0.16:47833 - 39068 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092174s
	[INFO] 10.244.0.16:43188 - 11912 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001837301s
	[INFO] 10.244.0.16:43188 - 11732 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.008709334s
	[INFO] 10.244.0.16:45159 - 12007 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000196623s
	[INFO] 10.244.0.16:45159 - 11670 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000331856s
	[INFO] 10.244.0.21:57605 - 18808 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0001991s
	[INFO] 10.244.0.21:37288 - 39638 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0002848s
	[INFO] 10.244.0.21:45241 - 9411 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139648s
	[INFO] 10.244.0.21:34258 - 48519 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000233101s
	[INFO] 10.244.0.21:58372 - 59605 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125207s
	[INFO] 10.244.0.21:32826 - 33803 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000225495s
	[INFO] 10.244.0.21:44498 - 8327 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004329806s
	[INFO] 10.244.0.21:38031 - 33619 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004695557s
	[INFO] 10.244.0.21:54542 - 4722 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.003169147s
	[INFO] 10.244.0.21:37871 - 8745 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003311191s
	[INFO] 10.244.0.24:40100 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000193561s
	[INFO] 10.244.0.24:55812 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164811s
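The burst of NXDOMAIN answers above is normal cluster-DNS behaviour rather than a failure: with the default ndots:5 setting, a name with fewer than five dots such as registry.kube-system.svc.cluster.local is first tried with each resolv.conf search suffix appended (the pod's namespace, svc.cluster.local, cluster.local, and the host's us-east-2.compute.internal domain) before the absolute name resolves with NOERROR, exactly as the last two lines show. A quick, illustrative way to confirm the search path from a pod in this cluster (using the busybox pod the test created; a pod in another namespace would show its own namespace as the first suffix):

	kubectl exec busybox -- cat /etc/resolv.conf
	# expect a 'search default.svc.cluster.local svc.cluster.local cluster.local ...'
	# line plus 'options ndots:5'; the nameserver IP depends on the cluster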
	
	
	==> describe nodes <==
	Name:               addons-379549
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-379549
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=addons-379549
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T18_57_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-379549
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-379549"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 18:57:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-379549
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:02:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:02:23 +0000   Fri, 17 Oct 2025 18:57:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:02:23 +0000   Fri, 17 Oct 2025 18:57:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:02:23 +0000   Fri, 17 Oct 2025 18:57:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:02:23 +0000   Fri, 17 Oct 2025 18:58:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-379549
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                b47e3782-9a4d-4307-bd31-a9c8af0ab3fc
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  default                     cloud-spanner-emulator-86bd5cbb97-9vn6g      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  default                     hello-world-app-5d498dc89-x46cm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-9vfvf                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  gcp-auth                    gcp-auth-78565c9fb4-4z5sp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-qx9b8    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m54s
	  kube-system                 coredns-66bc5c9577-cdn2p                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 csi-hostpathplugin-dnj9h                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 etcd-addons-379549                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m5s
	  kube-system                 kindnet-2gclq                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m
	  kube-system                 kube-apiserver-addons-379549                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-controller-manager-addons-379549        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-proxy-9fnkd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-scheduler-addons-379549                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 metrics-server-85b7d694d7-kx9vs              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m54s
	  kube-system                 nvidia-device-plugin-daemonset-5tz6p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 registry-6b586f9694-lggv9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 registry-creds-764b6fb674-v5s46              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 registry-proxy-q985d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 snapshot-controller-7d9fbc56b8-8j5lv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 snapshot-controller-7d9fbc56b8-ctqmz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  local-path-storage          local-path-provisioner-648f6765c9-mtqnt      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-pk6pq               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m58s                  kube-proxy       
	  Normal   Starting                 5m12s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m12s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m12s (x8 over 5m12s)  kubelet          Node addons-379549 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m12s (x8 over 5m12s)  kubelet          Node addons-379549 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m12s (x8 over 5m12s)  kubelet          Node addons-379549 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m5s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m5s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m5s                   kubelet          Node addons-379549 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m5s                   kubelet          Node addons-379549 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m5s                   kubelet          Node addons-379549 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m1s                   node-controller  Node addons-379549 event: Registered Node addons-379549 in Controller
	  Normal   NodeReady                4m18s                  kubelet          Node addons-379549 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct17 18:19] overlayfs: idmapped layers are currently not supported
	[Oct17 18:20] overlayfs: idmapped layers are currently not supported
	[ +27.630815] overlayfs: idmapped layers are currently not supported
	[ +17.813448] overlayfs: idmapped layers are currently not supported
	[Oct17 18:24] overlayfs: idmapped layers are currently not supported
	[ +30.872028] overlayfs: idmapped layers are currently not supported
	[Oct17 18:25] overlayfs: idmapped layers are currently not supported
	[Oct17 18:27] overlayfs: idmapped layers are currently not supported
	[Oct17 18:29] overlayfs: idmapped layers are currently not supported
	[Oct17 18:30] overlayfs: idmapped layers are currently not supported
	[Oct17 18:31] overlayfs: idmapped layers are currently not supported
	[  +9.357480] overlayfs: idmapped layers are currently not supported
	[Oct17 18:33] overlayfs: idmapped layers are currently not supported
	[  +5.779853] overlayfs: idmapped layers are currently not supported
	[Oct17 18:34] overlayfs: idmapped layers are currently not supported
	[Oct17 18:35] overlayfs: idmapped layers are currently not supported
	[Oct17 18:36] overlayfs: idmapped layers are currently not supported
	[ +20.850590] overlayfs: idmapped layers are currently not supported
	[Oct17 18:38] overlayfs: idmapped layers are currently not supported
	[ +19.812679] overlayfs: idmapped layers are currently not supported
	[Oct17 18:39] overlayfs: idmapped layers are currently not supported
	[ +19.225178] overlayfs: idmapped layers are currently not supported
	[Oct17 18:40] overlayfs: idmapped layers are currently not supported
	[Oct17 18:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 18:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [beb0486de70d8e5dc49e7b06450eb1df72f27a30d1a116fcef4687a1229bab02] <==
	{"level":"warn","ts":"2025-10-17T18:57:33.358513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.373157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.388755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.411822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.422265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.439273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.458057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.478681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.493239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.513351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.531673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.545406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.564041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.577147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.601684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.623779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.646521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.656111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.747307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:49.688731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:49.705081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:58:11.733119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:58:11.747376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:58:11.780093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:58:11.788547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44944","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [33c887c51c9775153c4f58b08791de8b5bcd6c2887c892fe45a78af221c928fd] <==
	2025/10/17 18:59:32 GCP Auth Webhook started!
	2025/10/17 18:59:35 Ready to marshal response ...
	2025/10/17 18:59:35 Ready to write response ...
	2025/10/17 18:59:36 Ready to marshal response ...
	2025/10/17 18:59:36 Ready to write response ...
	2025/10/17 18:59:36 Ready to marshal response ...
	2025/10/17 18:59:36 Ready to write response ...
	2025/10/17 18:59:56 Ready to marshal response ...
	2025/10/17 18:59:56 Ready to write response ...
	2025/10/17 18:59:57 Ready to marshal response ...
	2025/10/17 18:59:57 Ready to write response ...
	2025/10/17 19:00:20 Ready to marshal response ...
	2025/10/17 19:00:20 Ready to write response ...
	2025/10/17 19:00:26 Ready to marshal response ...
	2025/10/17 19:00:26 Ready to write response ...
	2025/10/17 19:00:48 Ready to marshal response ...
	2025/10/17 19:00:48 Ready to write response ...
	2025/10/17 19:00:48 Ready to marshal response ...
	2025/10/17 19:00:48 Ready to write response ...
	2025/10/17 19:00:56 Ready to marshal response ...
	2025/10/17 19:00:56 Ready to write response ...
	2025/10/17 19:02:40 Ready to marshal response ...
	2025/10/17 19:02:40 Ready to write response ...
	
	
	==> kernel <==
	 19:02:43 up  1:45,  0 user,  load average: 0.39, 1.00, 1.34
	Linux addons-379549 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0c926298efaa60b8e6e7e23cbd555e5271a4b331186cbf064b8a06a84c92da02] <==
	I1017 19:00:33.716262       1 main.go:301] handling current node
	I1017 19:00:43.716596       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:00:43.716748       1 main.go:301] handling current node
	I1017 19:00:53.716237       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:00:53.716271       1 main.go:301] handling current node
	I1017 19:01:03.718304       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:01:03.718343       1 main.go:301] handling current node
	I1017 19:01:13.716208       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:01:13.716238       1 main.go:301] handling current node
	I1017 19:01:23.723497       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:01:23.723611       1 main.go:301] handling current node
	I1017 19:01:33.719251       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:01:33.719283       1 main.go:301] handling current node
	I1017 19:01:43.723205       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:01:43.723318       1 main.go:301] handling current node
	I1017 19:01:53.723197       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:01:53.723231       1 main.go:301] handling current node
	I1017 19:02:03.723849       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:02:03.723963       1 main.go:301] handling current node
	I1017 19:02:13.724646       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:02:13.724679       1 main.go:301] handling current node
	I1017 19:02:23.715897       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:02:23.716041       1 main.go:301] handling current node
	I1017 19:02:33.724589       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:02:33.724622       1 main.go:301] handling current node
	
	
	==> kube-apiserver [612fc65e5e8667898a174c79ca2be5a8ae8041623681c350e5ee77608e36c583] <==
	W1017 18:58:11.747274       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1017 18:58:11.774244       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1017 18:58:11.788338       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1017 18:58:24.198864       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.157.28:443: connect: connection refused
	E1017 18:58:24.199008       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.157.28:443: connect: connection refused" logger="UnhandledError"
	W1017 18:58:24.199543       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.157.28:443: connect: connection refused
	E1017 18:58:24.199627       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.157.28:443: connect: connection refused" logger="UnhandledError"
	W1017 18:58:24.265259       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.157.28:443: connect: connection refused
	E1017 18:58:24.265318       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.157.28:443: connect: connection refused" logger="UnhandledError"
	W1017 18:58:39.900743       1 handler_proxy.go:99] no RequestInfo found in the context
	E1017 18:58:39.900812       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1017 18:58:39.901775       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.6.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.6.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.6.125:443: connect: connection refused" logger="UnhandledError"
	E1017 18:58:39.902294       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.6.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.6.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.6.125:443: connect: connection refused" logger="UnhandledError"
	E1017 18:58:39.908674       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.6.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.6.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.6.125:443: connect: connection refused" logger="UnhandledError"
	E1017 18:58:39.929939       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.6.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.6.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.6.125:443: connect: connection refused" logger="UnhandledError"
	I1017 18:58:40.107464       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1017 19:00:09.771334       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1017 19:00:11.461735       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1017 19:00:19.905320       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1017 19:00:20.237137       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.148.203"}
	E1017 19:00:34.550100       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1017 19:02:40.858872       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.136.64"}
	
	
	==> kube-controller-manager [22a266e5672abf5ca502cdbd17cb99d63f6b55ce0cb5a206303cec2167f7d569] <==
	I1017 18:57:41.724356       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 18:57:41.724371       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 18:57:41.725470       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 18:57:41.725563       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 18:57:41.728674       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 18:57:41.739102       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 18:57:41.745369       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 18:57:41.748575       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 18:57:41.748625       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 18:57:41.749734       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 18:57:41.750865       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 18:57:41.750907       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 18:57:41.750958       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 18:57:41.754367       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 18:57:41.755555       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 18:57:41.758125       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	E1017 18:57:48.072022       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1017 18:58:11.726017       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1017 18:58:11.726189       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1017 18:58:11.726252       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1017 18:58:11.761823       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1017 18:58:11.765940       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1017 18:58:11.826702       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 18:58:11.867698       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 18:58:26.679271       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ad27f04cf6a14e6b40d51c3fe333d53a8ebaf1685edb0d71d7e089c7f96b8001] <==
	I1017 18:57:43.700036       1 server_linux.go:53] "Using iptables proxy"
	I1017 18:57:43.825351       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 18:57:43.926061       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 18:57:43.926100       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 18:57:43.926178       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 18:57:43.980396       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 18:57:43.980452       1 server_linux.go:132] "Using iptables Proxier"
	I1017 18:57:43.984272       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 18:57:43.984570       1 server.go:527] "Version info" version="v1.34.1"
	I1017 18:57:43.984586       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 18:57:43.989847       1 config.go:200] "Starting service config controller"
	I1017 18:57:43.989885       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 18:57:43.989916       1 config.go:106] "Starting endpoint slice config controller"
	I1017 18:57:43.989921       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 18:57:43.989937       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 18:57:43.989941       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 18:57:43.990703       1 config.go:309] "Starting node config controller"
	I1017 18:57:43.990717       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 18:57:43.990724       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 18:57:44.090045       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 18:57:44.090086       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 18:57:44.090147       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [04fd09957b07ce3e283a4d21b3fd7e87d3b47d90a25d55656735805959496cf2] <==
	I1017 18:57:35.877549       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 18:57:35.879841       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 18:57:35.879918       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 18:57:35.880880       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 18:57:35.880942       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 18:57:35.891557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 18:57:35.891780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 18:57:35.891862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 18:57:35.892606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 18:57:35.897236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 18:57:35.897419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 18:57:35.897473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 18:57:35.897560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 18:57:35.897634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 18:57:35.897685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 18:57:35.897751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 18:57:35.897787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 18:57:35.897817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 18:57:35.897874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 18:57:35.897929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 18:57:35.897967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 18:57:35.898058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 18:57:35.898097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 18:57:35.898893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1017 18:57:37.180719       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:00:58 addons-379549 kubelet[1304]: I1017 19:00:58.855838    1304 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/323de4cd-20d6-4157-ba68-0e4a688db059-gcp-creds\") on node \"addons-379549\" DevicePath \"\""
	Oct 17 19:00:59 addons-379549 kubelet[1304]: I1017 19:00:59.635296    1304 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f823f1b3b88f02d2f4039b2c5416077cc36e25bf3ea7d77e991d58402b41671"
	Oct 17 19:00:59 addons-379549 kubelet[1304]: E1017 19:00:59.637195    1304 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-5684922c-aed9-497d-9bbf-0e02c327a0d2\" is forbidden: User \"system:node:addons-379549\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-379549' and this object" podUID="323de4cd-20d6-4157-ba68-0e4a688db059" pod="local-path-storage/helper-pod-delete-pvc-5684922c-aed9-497d-9bbf-0e02c327a0d2"
	Oct 17 19:01:01 addons-379549 kubelet[1304]: I1017 19:01:01.482345    1304 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="323de4cd-20d6-4157-ba68-0e4a688db059" path="/var/lib/kubelet/pods/323de4cd-20d6-4157-ba68-0e4a688db059/volumes"
	Oct 17 19:01:25 addons-379549 kubelet[1304]: I1017 19:01:25.481852    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5tz6p" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:01:37 addons-379549 kubelet[1304]: I1017 19:01:37.492649    1304 scope.go:117] "RemoveContainer" containerID="c3f1e6a5dec9ef5d43354d874bef79add229619d859940d5affd00de192858db"
	Oct 17 19:01:37 addons-379549 kubelet[1304]: I1017 19:01:37.504766    1304 scope.go:117] "RemoveContainer" containerID="745cfacf49b5b11d3bf69c05345607f0a49392d7975f361bd96e40f6d35c4fb0"
	Oct 17 19:01:37 addons-379549 kubelet[1304]: I1017 19:01:37.514131    1304 scope.go:117] "RemoveContainer" containerID="aca2c4f4ae60c00a1b4118b1b80c1cd930e8519d0e1eeb76741ba534c70a95c5"
	Oct 17 19:01:37 addons-379549 kubelet[1304]: E1017 19:01:37.555743    1304 manager.go:1116] Failed to create existing container: /crio-7f823f1b3b88f02d2f4039b2c5416077cc36e25bf3ea7d77e991d58402b41671: Error finding container 7f823f1b3b88f02d2f4039b2c5416077cc36e25bf3ea7d77e991d58402b41671: Status 404 returned error can't find the container with id 7f823f1b3b88f02d2f4039b2c5416077cc36e25bf3ea7d77e991d58402b41671
	Oct 17 19:01:37 addons-379549 kubelet[1304]: E1017 19:01:37.556038    1304 manager.go:1116] Failed to create existing container: /docker/55fec2c4916f9dad039fe64a881991db0345ca7e5cbc7415c8368965be03ba66/crio-417529f307046ad89b301b3422bef44381a18eaaaae2fc5e6b72f1d0b0f3e6d6: Error finding container 417529f307046ad89b301b3422bef44381a18eaaaae2fc5e6b72f1d0b0f3e6d6: Status 404 returned error can't find the container with id 417529f307046ad89b301b3422bef44381a18eaaaae2fc5e6b72f1d0b0f3e6d6
	Oct 17 19:01:37 addons-379549 kubelet[1304]: E1017 19:01:37.556271    1304 manager.go:1116] Failed to create existing container: /crio-417529f307046ad89b301b3422bef44381a18eaaaae2fc5e6b72f1d0b0f3e6d6: Error finding container 417529f307046ad89b301b3422bef44381a18eaaaae2fc5e6b72f1d0b0f3e6d6: Status 404 returned error can't find the container with id 417529f307046ad89b301b3422bef44381a18eaaaae2fc5e6b72f1d0b0f3e6d6
	Oct 17 19:01:51 addons-379549 kubelet[1304]: I1017 19:01:51.478937    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-q985d" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:01:56 addons-379549 kubelet[1304]: I1017 19:01:56.478671    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-lggv9" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:02:34 addons-379549 kubelet[1304]: I1017 19:02:34.483261    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-v5s46" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:02:34 addons-379549 kubelet[1304]: W1017 19:02:34.506347    1304 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/55fec2c4916f9dad039fe64a881991db0345ca7e5cbc7415c8368965be03ba66/crio-ca258827d8e9b865af67f5d71763ac7a67d7a5e7958335b4d8ef4e3d9d34df82 WatchSource:0}: Error finding container ca258827d8e9b865af67f5d71763ac7a67d7a5e7958335b4d8ef4e3d9d34df82: Status 404 returned error can't find the container with id ca258827d8e9b865af67f5d71763ac7a67d7a5e7958335b4d8ef4e3d9d34df82
	Oct 17 19:02:36 addons-379549 kubelet[1304]: I1017 19:02:36.979940    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-v5s46" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:02:36 addons-379549 kubelet[1304]: I1017 19:02:36.980456    1304 scope.go:117] "RemoveContainer" containerID="2d00bd8021c27a3586f430f55e3144a87832275fa6014ba4671fc3a59d4303fc"
	Oct 17 19:02:37 addons-379549 kubelet[1304]: I1017 19:02:37.558512    1304 scope.go:117] "RemoveContainer" containerID="2d00bd8021c27a3586f430f55e3144a87832275fa6014ba4671fc3a59d4303fc"
	Oct 17 19:02:37 addons-379549 kubelet[1304]: I1017 19:02:37.986555    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-v5s46" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:02:37 addons-379549 kubelet[1304]: I1017 19:02:37.986623    1304 scope.go:117] "RemoveContainer" containerID="6e4c93523e66eeccf2ff2824b7532d530bfcd1fbf71d70c9e10ab81a77d117f5"
	Oct 17 19:02:37 addons-379549 kubelet[1304]: E1017 19:02:37.986774    1304 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-v5s46_kube-system(26e0457e-0841-4658-b957-473746bb21d1)\"" pod="kube-system/registry-creds-764b6fb674-v5s46" podUID="26e0457e-0841-4658-b957-473746bb21d1"
	Oct 17 19:02:39 addons-379549 kubelet[1304]: I1017 19:02:39.478305    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5tz6p" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:02:40 addons-379549 kubelet[1304]: I1017 19:02:40.820426    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f9061eb8-5415-4b80-86bb-73486ec69897-gcp-creds\") pod \"hello-world-app-5d498dc89-x46cm\" (UID: \"f9061eb8-5415-4b80-86bb-73486ec69897\") " pod="default/hello-world-app-5d498dc89-x46cm"
	Oct 17 19:02:40 addons-379549 kubelet[1304]: I1017 19:02:40.821035    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz4qq\" (UniqueName: \"kubernetes.io/projected/f9061eb8-5415-4b80-86bb-73486ec69897-kube-api-access-wz4qq\") pod \"hello-world-app-5d498dc89-x46cm\" (UID: \"f9061eb8-5415-4b80-86bb-73486ec69897\") " pod="default/hello-world-app-5d498dc89-x46cm"
	Oct 17 19:02:41 addons-379549 kubelet[1304]: W1017 19:02:41.073254    1304 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/55fec2c4916f9dad039fe64a881991db0345ca7e5cbc7415c8368965be03ba66/crio-a64aaf0247afdaaee03bc46820c454a77ebdc5458146003ed5326d81e570127e WatchSource:0}: Error finding container a64aaf0247afdaaee03bc46820c454a77ebdc5458146003ed5326d81e570127e: Status 404 returned error can't find the container with id a64aaf0247afdaaee03bc46820c454a77ebdc5458146003ed5326d81e570127e
	
	
	==> storage-provisioner [c83ac4cff13e7be5a7a592b7ef3ad2c0dc7e4d780b6863448ea34fc512f98e11] <==
	W1017 19:02:18.588244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:20.591244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:20.595423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:22.598362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:22.602369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:24.605246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:24.609566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:26.613579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:26.619034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:28.622466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:28.626248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:30.629511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:30.634402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:32.638071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:32.648097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:34.651657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:34.657663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:36.661007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:36.665553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:38.670099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:38.677425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:40.696220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:40.764247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:42.767029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:02:42.774332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-379549 -n addons-379549
helpers_test.go:269: (dbg) Run:  kubectl --context addons-379549 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-x76j5 ingress-nginx-admission-patch-5dn9f
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-379549 describe pod ingress-nginx-admission-create-x76j5 ingress-nginx-admission-patch-5dn9f
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-379549 describe pod ingress-nginx-admission-create-x76j5 ingress-nginx-admission-patch-5dn9f: exit status 1 (117.158554ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-x76j5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5dn9f" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-379549 describe pod ingress-nginx-admission-create-x76j5 ingress-nginx-admission-patch-5dn9f: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-379549 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (268.232371ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:02:44.235274  270067 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:02:44.236726  270067 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:02:44.236780  270067 out.go:374] Setting ErrFile to fd 2...
	I1017 19:02:44.236805  270067 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:02:44.237118  270067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:02:44.237468  270067 mustload.go:65] Loading cluster: addons-379549
	I1017 19:02:44.237886  270067 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:02:44.237932  270067 addons.go:606] checking whether the cluster is paused
	I1017 19:02:44.238061  270067 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:02:44.238102  270067 host.go:66] Checking if "addons-379549" exists ...
	I1017 19:02:44.238575  270067 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 19:02:44.255442  270067 ssh_runner.go:195] Run: systemctl --version
	I1017 19:02:44.255513  270067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 19:02:44.273790  270067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 19:02:44.375069  270067 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:02:44.375148  270067 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:02:44.405824  270067 cri.go:89] found id: "6e4c93523e66eeccf2ff2824b7532d530bfcd1fbf71d70c9e10ab81a77d117f5"
	I1017 19:02:44.405843  270067 cri.go:89] found id: "5cf24bffa8a4abae885a44b533000299393dbf536f944868196b772da2ea935d"
	I1017 19:02:44.405848  270067 cri.go:89] found id: "80799fb75c9169389498ebfca9e8bd150dc22745bd39afd919de30736f993d78"
	I1017 19:02:44.405852  270067 cri.go:89] found id: "6fde7d0006c1aaf6e1954ddbde6bdf9af5d8e3650951bef9ba330e731274d207"
	I1017 19:02:44.405855  270067 cri.go:89] found id: "92b113c7cfe7940976d0561d7ffff8e1ec02e01f0dcc54cd8e589eabf32cc1b0"
	I1017 19:02:44.405859  270067 cri.go:89] found id: "5651bbb1546eae506067477cc633603ca2ac02a842f17e09ce6fe9a79ffa0e0e"
	I1017 19:02:44.405862  270067 cri.go:89] found id: "b06455475d2b37b302d9223e6cc497a0c417c77589f2ced0938ddbd1b2411306"
	I1017 19:02:44.405864  270067 cri.go:89] found id: "ce48b4c920d81fc27eaef5e1119f5ded186bb80b0f7da0544430a2c3fb4fc29a"
	I1017 19:02:44.405867  270067 cri.go:89] found id: "accf4579f8250f27038827ec1b315b311a306293af9ef176a69914469bb2353b"
	I1017 19:02:44.405879  270067 cri.go:89] found id: "fb1f7d0e065d8023e9546ae0a6a64fa04a57b0b47d3b44f594141de71b080618"
	I1017 19:02:44.405882  270067 cri.go:89] found id: "3986728e63c14c7fd277443687da324c568b58d749e701a217495bfa71741734"
	I1017 19:02:44.405886  270067 cri.go:89] found id: "88eee337e7ec6eae66159898b434ac7073a3200b04b237aec88ca3e25bdb2222"
	I1017 19:02:44.405889  270067 cri.go:89] found id: "012db353f99b6e2ef9ff8f6f38fdcfeb8ab14b588f53e8952b29395971f22d83"
	I1017 19:02:44.405892  270067 cri.go:89] found id: "9361ebb005625fb2ad3d70ee0ecdfc71f800630500b97f40a602782e074bb2c4"
	I1017 19:02:44.405896  270067 cri.go:89] found id: "de5165e5bfa9f6277e7973043a69fcf80ecd76150ce5c7fc069314ed88054ea7"
	I1017 19:02:44.405907  270067 cri.go:89] found id: "37d41037f4ee9382157bc059bf46e949eab3051aeb71edbb106837671cf3e24a"
	I1017 19:02:44.405910  270067 cri.go:89] found id: "c83ac4cff13e7be5a7a592b7ef3ad2c0dc7e4d780b6863448ea34fc512f98e11"
	I1017 19:02:44.405915  270067 cri.go:89] found id: "70437ef1453701665ef3d63f7f7a1d3affd361ef34251a1b4b2f6c5615248d1b"
	I1017 19:02:44.405918  270067 cri.go:89] found id: "0c926298efaa60b8e6e7e23cbd555e5271a4b331186cbf064b8a06a84c92da02"
	I1017 19:02:44.405921  270067 cri.go:89] found id: "ad27f04cf6a14e6b40d51c3fe333d53a8ebaf1685edb0d71d7e089c7f96b8001"
	I1017 19:02:44.405925  270067 cri.go:89] found id: "22a266e5672abf5ca502cdbd17cb99d63f6b55ce0cb5a206303cec2167f7d569"
	I1017 19:02:44.405928  270067 cri.go:89] found id: "beb0486de70d8e5dc49e7b06450eb1df72f27a30d1a116fcef4687a1229bab02"
	I1017 19:02:44.405931  270067 cri.go:89] found id: "04fd09957b07ce3e283a4d21b3fd7e87d3b47d90a25d55656735805959496cf2"
	I1017 19:02:44.405934  270067 cri.go:89] found id: "612fc65e5e8667898a174c79ca2be5a8ae8041623681c350e5ee77608e36c583"
	I1017 19:02:44.405937  270067 cri.go:89] found id: ""
	I1017 19:02:44.405987  270067 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:02:44.421890  270067 out.go:203] 
	W1017 19:02:44.424854  270067 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:02:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:02:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:02:44.424888  270067 out.go:285] * 
	* 
	W1017 19:02:44.430870  270067 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:02:44.433786  270067 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-379549 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-379549 addons disable ingress --alsologtostderr -v=1: exit status 11 (276.908646ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:02:44.500012  270110 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:02:44.500735  270110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:02:44.500751  270110 out.go:374] Setting ErrFile to fd 2...
	I1017 19:02:44.500757  270110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:02:44.501010  270110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:02:44.501300  270110 mustload.go:65] Loading cluster: addons-379549
	I1017 19:02:44.501665  270110 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:02:44.501683  270110 addons.go:606] checking whether the cluster is paused
	I1017 19:02:44.501786  270110 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:02:44.501810  270110 host.go:66] Checking if "addons-379549" exists ...
	I1017 19:02:44.502271  270110 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 19:02:44.526485  270110 ssh_runner.go:195] Run: systemctl --version
	I1017 19:02:44.526548  270110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 19:02:44.545143  270110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 19:02:44.651178  270110 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:02:44.651271  270110 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:02:44.683742  270110 cri.go:89] found id: "6e4c93523e66eeccf2ff2824b7532d530bfcd1fbf71d70c9e10ab81a77d117f5"
	I1017 19:02:44.683765  270110 cri.go:89] found id: "5cf24bffa8a4abae885a44b533000299393dbf536f944868196b772da2ea935d"
	I1017 19:02:44.683770  270110 cri.go:89] found id: "80799fb75c9169389498ebfca9e8bd150dc22745bd39afd919de30736f993d78"
	I1017 19:02:44.683774  270110 cri.go:89] found id: "6fde7d0006c1aaf6e1954ddbde6bdf9af5d8e3650951bef9ba330e731274d207"
	I1017 19:02:44.683777  270110 cri.go:89] found id: "92b113c7cfe7940976d0561d7ffff8e1ec02e01f0dcc54cd8e589eabf32cc1b0"
	I1017 19:02:44.683780  270110 cri.go:89] found id: "5651bbb1546eae506067477cc633603ca2ac02a842f17e09ce6fe9a79ffa0e0e"
	I1017 19:02:44.683783  270110 cri.go:89] found id: "b06455475d2b37b302d9223e6cc497a0c417c77589f2ced0938ddbd1b2411306"
	I1017 19:02:44.683786  270110 cri.go:89] found id: "ce48b4c920d81fc27eaef5e1119f5ded186bb80b0f7da0544430a2c3fb4fc29a"
	I1017 19:02:44.683789  270110 cri.go:89] found id: "accf4579f8250f27038827ec1b315b311a306293af9ef176a69914469bb2353b"
	I1017 19:02:44.683796  270110 cri.go:89] found id: "fb1f7d0e065d8023e9546ae0a6a64fa04a57b0b47d3b44f594141de71b080618"
	I1017 19:02:44.683800  270110 cri.go:89] found id: "3986728e63c14c7fd277443687da324c568b58d749e701a217495bfa71741734"
	I1017 19:02:44.683803  270110 cri.go:89] found id: "88eee337e7ec6eae66159898b434ac7073a3200b04b237aec88ca3e25bdb2222"
	I1017 19:02:44.683806  270110 cri.go:89] found id: "012db353f99b6e2ef9ff8f6f38fdcfeb8ab14b588f53e8952b29395971f22d83"
	I1017 19:02:44.683816  270110 cri.go:89] found id: "9361ebb005625fb2ad3d70ee0ecdfc71f800630500b97f40a602782e074bb2c4"
	I1017 19:02:44.683821  270110 cri.go:89] found id: "de5165e5bfa9f6277e7973043a69fcf80ecd76150ce5c7fc069314ed88054ea7"
	I1017 19:02:44.683830  270110 cri.go:89] found id: "37d41037f4ee9382157bc059bf46e949eab3051aeb71edbb106837671cf3e24a"
	I1017 19:02:44.683842  270110 cri.go:89] found id: "c83ac4cff13e7be5a7a592b7ef3ad2c0dc7e4d780b6863448ea34fc512f98e11"
	I1017 19:02:44.683847  270110 cri.go:89] found id: "70437ef1453701665ef3d63f7f7a1d3affd361ef34251a1b4b2f6c5615248d1b"
	I1017 19:02:44.683854  270110 cri.go:89] found id: "0c926298efaa60b8e6e7e23cbd555e5271a4b331186cbf064b8a06a84c92da02"
	I1017 19:02:44.683857  270110 cri.go:89] found id: "ad27f04cf6a14e6b40d51c3fe333d53a8ebaf1685edb0d71d7e089c7f96b8001"
	I1017 19:02:44.683862  270110 cri.go:89] found id: "22a266e5672abf5ca502cdbd17cb99d63f6b55ce0cb5a206303cec2167f7d569"
	I1017 19:02:44.683865  270110 cri.go:89] found id: "beb0486de70d8e5dc49e7b06450eb1df72f27a30d1a116fcef4687a1229bab02"
	I1017 19:02:44.683867  270110 cri.go:89] found id: "04fd09957b07ce3e283a4d21b3fd7e87d3b47d90a25d55656735805959496cf2"
	I1017 19:02:44.683871  270110 cri.go:89] found id: "612fc65e5e8667898a174c79ca2be5a8ae8041623681c350e5ee77608e36c583"
	I1017 19:02:44.683874  270110 cri.go:89] found id: ""
	I1017 19:02:44.683925  270110 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:02:44.698679  270110 out.go:203] 
	W1017 19:02:44.701594  270110 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:02:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:02:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:02:44.701637  270110 out.go:285] * 
	* 
	W1017 19:02:44.707717  270110 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:02:44.710748  270110 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-379549 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.14s)
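Note on the failure mode above: the exit status 11 (MK_ADDON_DISABLE_PAUSED) comes from minikube's "checking whether the cluster is paused" step, which lists kube-system containers with crictl and then shells out to `sudo runc list -f json`; on this crio node that call fails with "open /run/runc: no such file or directory". A minimal sketch for reproducing the check by hand, assuming the addons-379549 profile is still running (commands lifted from the ssh_runner lines in the stderr log above; anything beyond those lines is an assumption):

	# list kube-system container IDs, as the paused-check does
	minikube -p addons-379549 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# reproduce the failing call: open /run/runc: no such file or directory
	minikube -p addons-379549 ssh "sudo runc list -f json"

The inspektor-gadget disable attempt below fails on the same runc call.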

                                                
                                    
TestAddons/parallel/InspektorGadget (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-9vfvf" [17f7db54-4d2a-4065-b931-4da8f494d8e4] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004018039s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-379549 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (259.420331ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:00:19.365695  267689 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:00:19.366602  267689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:19.366616  267689 out.go:374] Setting ErrFile to fd 2...
	I1017 19:00:19.366621  267689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:19.366906  267689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:00:19.367225  267689 mustload.go:65] Loading cluster: addons-379549
	I1017 19:00:19.367624  267689 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:19.367644  267689 addons.go:606] checking whether the cluster is paused
	I1017 19:00:19.367779  267689 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:19.367813  267689 host.go:66] Checking if "addons-379549" exists ...
	I1017 19:00:19.368277  267689 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 19:00:19.386870  267689 ssh_runner.go:195] Run: systemctl --version
	I1017 19:00:19.386936  267689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 19:00:19.404019  267689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 19:00:19.511266  267689 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:00:19.511353  267689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:00:19.543807  267689 cri.go:89] found id: "5cf24bffa8a4abae885a44b533000299393dbf536f944868196b772da2ea935d"
	I1017 19:00:19.543827  267689 cri.go:89] found id: "80799fb75c9169389498ebfca9e8bd150dc22745bd39afd919de30736f993d78"
	I1017 19:00:19.543832  267689 cri.go:89] found id: "6fde7d0006c1aaf6e1954ddbde6bdf9af5d8e3650951bef9ba330e731274d207"
	I1017 19:00:19.543835  267689 cri.go:89] found id: "92b113c7cfe7940976d0561d7ffff8e1ec02e01f0dcc54cd8e589eabf32cc1b0"
	I1017 19:00:19.543839  267689 cri.go:89] found id: "5651bbb1546eae506067477cc633603ca2ac02a842f17e09ce6fe9a79ffa0e0e"
	I1017 19:00:19.543843  267689 cri.go:89] found id: "b06455475d2b37b302d9223e6cc497a0c417c77589f2ced0938ddbd1b2411306"
	I1017 19:00:19.543847  267689 cri.go:89] found id: "ce48b4c920d81fc27eaef5e1119f5ded186bb80b0f7da0544430a2c3fb4fc29a"
	I1017 19:00:19.543850  267689 cri.go:89] found id: "accf4579f8250f27038827ec1b315b311a306293af9ef176a69914469bb2353b"
	I1017 19:00:19.543853  267689 cri.go:89] found id: "fb1f7d0e065d8023e9546ae0a6a64fa04a57b0b47d3b44f594141de71b080618"
	I1017 19:00:19.543867  267689 cri.go:89] found id: "3986728e63c14c7fd277443687da324c568b58d749e701a217495bfa71741734"
	I1017 19:00:19.543870  267689 cri.go:89] found id: "88eee337e7ec6eae66159898b434ac7073a3200b04b237aec88ca3e25bdb2222"
	I1017 19:00:19.543873  267689 cri.go:89] found id: "012db353f99b6e2ef9ff8f6f38fdcfeb8ab14b588f53e8952b29395971f22d83"
	I1017 19:00:19.543876  267689 cri.go:89] found id: "9361ebb005625fb2ad3d70ee0ecdfc71f800630500b97f40a602782e074bb2c4"
	I1017 19:00:19.543879  267689 cri.go:89] found id: "de5165e5bfa9f6277e7973043a69fcf80ecd76150ce5c7fc069314ed88054ea7"
	I1017 19:00:19.543882  267689 cri.go:89] found id: "37d41037f4ee9382157bc059bf46e949eab3051aeb71edbb106837671cf3e24a"
	I1017 19:00:19.543887  267689 cri.go:89] found id: "c83ac4cff13e7be5a7a592b7ef3ad2c0dc7e4d780b6863448ea34fc512f98e11"
	I1017 19:00:19.543890  267689 cri.go:89] found id: "70437ef1453701665ef3d63f7f7a1d3affd361ef34251a1b4b2f6c5615248d1b"
	I1017 19:00:19.543895  267689 cri.go:89] found id: "0c926298efaa60b8e6e7e23cbd555e5271a4b331186cbf064b8a06a84c92da02"
	I1017 19:00:19.543898  267689 cri.go:89] found id: "ad27f04cf6a14e6b40d51c3fe333d53a8ebaf1685edb0d71d7e089c7f96b8001"
	I1017 19:00:19.543901  267689 cri.go:89] found id: "22a266e5672abf5ca502cdbd17cb99d63f6b55ce0cb5a206303cec2167f7d569"
	I1017 19:00:19.543905  267689 cri.go:89] found id: "beb0486de70d8e5dc49e7b06450eb1df72f27a30d1a116fcef4687a1229bab02"
	I1017 19:00:19.543908  267689 cri.go:89] found id: "04fd09957b07ce3e283a4d21b3fd7e87d3b47d90a25d55656735805959496cf2"
	I1017 19:00:19.543911  267689 cri.go:89] found id: "612fc65e5e8667898a174c79ca2be5a8ae8041623681c350e5ee77608e36c583"
	I1017 19:00:19.543914  267689 cri.go:89] found id: ""
	I1017 19:00:19.543965  267689 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:00:19.558912  267689 out.go:203] 
	W1017 19:00:19.561928  267689 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:00:19.561952  267689 out.go:285] * 
	* 
	W1017 19:00:19.567837  267689 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:00:19.570857  267689 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-379549 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.26s)
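Note: every addons enable/disable failure in this run exits for the same reason. The paused-state check first lists kube-system containers with crictl (which succeeds, as the "found id:" lines above show) and then shells out to sudo runc list -f json, which fails on this crio node with "open /run/runc: no such file or directory". A minimal Go sketch of that failing sequence, intended to be run as root on the node (for example via minikube -p addons-379549 ssh), is below; it only illustrates the failing step and is not minikube's actual implementation.

	// paused_check_sketch.go: sketch of the two commands the log shows, in order.
	// crictl enumerates kube-system containers, then runc reads its state dir.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Step that succeeds in the log above.
		ids, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("crictl returned %d bytes of container IDs\n", len(ids))

		// Step that fails in the log above: runc opens /run/runc by default.
		out, err := exec.Command("runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("runc list output: %s\n", out)
	}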

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.38s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.318516ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-kx9vs" [3f92a023-86a2-48df-b062-25036c73dd56] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.007875826s
addons_test.go:463: (dbg) Run:  kubectl --context addons-379549 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-379549 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (273.744293ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:00:13.094097  267590 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:00:13.094873  267590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:13.094915  267590 out.go:374] Setting ErrFile to fd 2...
	I1017 19:00:13.094941  267590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:13.095232  267590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:00:13.095576  267590 mustload.go:65] Loading cluster: addons-379549
	I1017 19:00:13.096007  267590 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:13.096050  267590 addons.go:606] checking whether the cluster is paused
	I1017 19:00:13.096179  267590 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:13.096221  267590 host.go:66] Checking if "addons-379549" exists ...
	I1017 19:00:13.096746  267590 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 19:00:13.114936  267590 ssh_runner.go:195] Run: systemctl --version
	I1017 19:00:13.114997  267590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 19:00:13.132724  267590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 19:00:13.240557  267590 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:00:13.240740  267590 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:00:13.276503  267590 cri.go:89] found id: "5cf24bffa8a4abae885a44b533000299393dbf536f944868196b772da2ea935d"
	I1017 19:00:13.276567  267590 cri.go:89] found id: "80799fb75c9169389498ebfca9e8bd150dc22745bd39afd919de30736f993d78"
	I1017 19:00:13.276573  267590 cri.go:89] found id: "6fde7d0006c1aaf6e1954ddbde6bdf9af5d8e3650951bef9ba330e731274d207"
	I1017 19:00:13.276577  267590 cri.go:89] found id: "92b113c7cfe7940976d0561d7ffff8e1ec02e01f0dcc54cd8e589eabf32cc1b0"
	I1017 19:00:13.276584  267590 cri.go:89] found id: "5651bbb1546eae506067477cc633603ca2ac02a842f17e09ce6fe9a79ffa0e0e"
	I1017 19:00:13.276588  267590 cri.go:89] found id: "b06455475d2b37b302d9223e6cc497a0c417c77589f2ced0938ddbd1b2411306"
	I1017 19:00:13.276595  267590 cri.go:89] found id: "ce48b4c920d81fc27eaef5e1119f5ded186bb80b0f7da0544430a2c3fb4fc29a"
	I1017 19:00:13.276599  267590 cri.go:89] found id: "accf4579f8250f27038827ec1b315b311a306293af9ef176a69914469bb2353b"
	I1017 19:00:13.276603  267590 cri.go:89] found id: "fb1f7d0e065d8023e9546ae0a6a64fa04a57b0b47d3b44f594141de71b080618"
	I1017 19:00:13.276609  267590 cri.go:89] found id: "3986728e63c14c7fd277443687da324c568b58d749e701a217495bfa71741734"
	I1017 19:00:13.276617  267590 cri.go:89] found id: "88eee337e7ec6eae66159898b434ac7073a3200b04b237aec88ca3e25bdb2222"
	I1017 19:00:13.276621  267590 cri.go:89] found id: "012db353f99b6e2ef9ff8f6f38fdcfeb8ab14b588f53e8952b29395971f22d83"
	I1017 19:00:13.276703  267590 cri.go:89] found id: "9361ebb005625fb2ad3d70ee0ecdfc71f800630500b97f40a602782e074bb2c4"
	I1017 19:00:13.276711  267590 cri.go:89] found id: "de5165e5bfa9f6277e7973043a69fcf80ecd76150ce5c7fc069314ed88054ea7"
	I1017 19:00:13.276718  267590 cri.go:89] found id: "37d41037f4ee9382157bc059bf46e949eab3051aeb71edbb106837671cf3e24a"
	I1017 19:00:13.276729  267590 cri.go:89] found id: "c83ac4cff13e7be5a7a592b7ef3ad2c0dc7e4d780b6863448ea34fc512f98e11"
	I1017 19:00:13.276733  267590 cri.go:89] found id: "70437ef1453701665ef3d63f7f7a1d3affd361ef34251a1b4b2f6c5615248d1b"
	I1017 19:00:13.276737  267590 cri.go:89] found id: "0c926298efaa60b8e6e7e23cbd555e5271a4b331186cbf064b8a06a84c92da02"
	I1017 19:00:13.276741  267590 cri.go:89] found id: "ad27f04cf6a14e6b40d51c3fe333d53a8ebaf1685edb0d71d7e089c7f96b8001"
	I1017 19:00:13.276744  267590 cri.go:89] found id: "22a266e5672abf5ca502cdbd17cb99d63f6b55ce0cb5a206303cec2167f7d569"
	I1017 19:00:13.276750  267590 cri.go:89] found id: "beb0486de70d8e5dc49e7b06450eb1df72f27a30d1a116fcef4687a1229bab02"
	I1017 19:00:13.276762  267590 cri.go:89] found id: "04fd09957b07ce3e283a4d21b3fd7e87d3b47d90a25d55656735805959496cf2"
	I1017 19:00:13.276765  267590 cri.go:89] found id: "612fc65e5e8667898a174c79ca2be5a8ae8041623681c350e5ee77608e36c583"
	I1017 19:00:13.276768  267590 cri.go:89] found id: ""
	I1017 19:00:13.276842  267590 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:00:13.293289  267590 out.go:203] 
	W1017 19:00:13.296253  267590 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:00:13.296277  267590 out.go:285] * 
	* 
	W1017 19:00:13.303088  267590 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:00:13.306209  267590 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-379549 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.38s)
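The error string points at runc's default state root (/run/runc). One way to narrow the cause is to check which OCI runtime state directories actually exist on the node: crio is commonly configured with crun, whose state lives under /run/crun, in which case a bare "runc list" has nothing to read. The directory names in the sketch below are common defaults and assumptions, not values confirmed by this report.

	// state_root_probe.go: hedged diagnostic that reports which candidate OCI
	// runtime state roots exist on the node. All paths are assumed defaults.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		candidates := []string{
			"/run/runc", // runc default --root
			"/run/crun", // crun default state dir (assumption)
		}
		for _, dir := range candidates {
			if st, err := os.Stat(dir); err == nil && st.IsDir() {
				fmt.Println("exists: ", dir)
			} else {
				fmt.Println("missing:", dir)
			}
		}
	}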

                                                
                                    
x
+
TestAddons/parallel/CSI (46.19s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1017 18:59:49.280045  259596 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1017 18:59:49.284023  259596 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1017 18:59:49.284049  259596 kapi.go:107] duration metric: took 4.019767ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.030327ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-379549 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-379549 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [cec83d8b-a963-4216-84cd-d55818c91459] Pending
helpers_test.go:352: "task-pv-pod" [cec83d8b-a963-4216-84cd-d55818c91459] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [cec83d8b-a963-4216-84cd-d55818c91459] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.002946567s
addons_test.go:572: (dbg) Run:  kubectl --context addons-379549 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-379549 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-379549 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-379549 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-379549 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-379549 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-379549 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [44d8b277-6daa-41c4-8a11-459557b65cdd] Pending
helpers_test.go:352: "task-pv-pod-restore" [44d8b277-6daa-41c4-8a11-459557b65cdd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [44d8b277-6daa-41c4-8a11-459557b65cdd] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003091264s
addons_test.go:614: (dbg) Run:  kubectl --context addons-379549 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-379549 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-379549 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-379549 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (275.216436ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:00:34.994090  268304 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:00:34.994666  268304 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:34.994681  268304 out.go:374] Setting ErrFile to fd 2...
	I1017 19:00:34.994686  268304 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:34.994981  268304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:00:34.995267  268304 mustload.go:65] Loading cluster: addons-379549
	I1017 19:00:34.995649  268304 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:34.995666  268304 addons.go:606] checking whether the cluster is paused
	I1017 19:00:34.995766  268304 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:34.995786  268304 host.go:66] Checking if "addons-379549" exists ...
	I1017 19:00:34.996238  268304 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 19:00:35.018501  268304 ssh_runner.go:195] Run: systemctl --version
	I1017 19:00:35.018567  268304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 19:00:35.039232  268304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 19:00:35.147018  268304 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:00:35.147097  268304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:00:35.178798  268304 cri.go:89] found id: "5cf24bffa8a4abae885a44b533000299393dbf536f944868196b772da2ea935d"
	I1017 19:00:35.178822  268304 cri.go:89] found id: "80799fb75c9169389498ebfca9e8bd150dc22745bd39afd919de30736f993d78"
	I1017 19:00:35.178827  268304 cri.go:89] found id: "6fde7d0006c1aaf6e1954ddbde6bdf9af5d8e3650951bef9ba330e731274d207"
	I1017 19:00:35.178831  268304 cri.go:89] found id: "92b113c7cfe7940976d0561d7ffff8e1ec02e01f0dcc54cd8e589eabf32cc1b0"
	I1017 19:00:35.178835  268304 cri.go:89] found id: "5651bbb1546eae506067477cc633603ca2ac02a842f17e09ce6fe9a79ffa0e0e"
	I1017 19:00:35.178838  268304 cri.go:89] found id: "b06455475d2b37b302d9223e6cc497a0c417c77589f2ced0938ddbd1b2411306"
	I1017 19:00:35.178842  268304 cri.go:89] found id: "ce48b4c920d81fc27eaef5e1119f5ded186bb80b0f7da0544430a2c3fb4fc29a"
	I1017 19:00:35.178846  268304 cri.go:89] found id: "accf4579f8250f27038827ec1b315b311a306293af9ef176a69914469bb2353b"
	I1017 19:00:35.178849  268304 cri.go:89] found id: "fb1f7d0e065d8023e9546ae0a6a64fa04a57b0b47d3b44f594141de71b080618"
	I1017 19:00:35.178855  268304 cri.go:89] found id: "3986728e63c14c7fd277443687da324c568b58d749e701a217495bfa71741734"
	I1017 19:00:35.178859  268304 cri.go:89] found id: "88eee337e7ec6eae66159898b434ac7073a3200b04b237aec88ca3e25bdb2222"
	I1017 19:00:35.178862  268304 cri.go:89] found id: "012db353f99b6e2ef9ff8f6f38fdcfeb8ab14b588f53e8952b29395971f22d83"
	I1017 19:00:35.178865  268304 cri.go:89] found id: "9361ebb005625fb2ad3d70ee0ecdfc71f800630500b97f40a602782e074bb2c4"
	I1017 19:00:35.178868  268304 cri.go:89] found id: "de5165e5bfa9f6277e7973043a69fcf80ecd76150ce5c7fc069314ed88054ea7"
	I1017 19:00:35.178872  268304 cri.go:89] found id: "37d41037f4ee9382157bc059bf46e949eab3051aeb71edbb106837671cf3e24a"
	I1017 19:00:35.178882  268304 cri.go:89] found id: "c83ac4cff13e7be5a7a592b7ef3ad2c0dc7e4d780b6863448ea34fc512f98e11"
	I1017 19:00:35.178885  268304 cri.go:89] found id: "70437ef1453701665ef3d63f7f7a1d3affd361ef34251a1b4b2f6c5615248d1b"
	I1017 19:00:35.178890  268304 cri.go:89] found id: "0c926298efaa60b8e6e7e23cbd555e5271a4b331186cbf064b8a06a84c92da02"
	I1017 19:00:35.178893  268304 cri.go:89] found id: "ad27f04cf6a14e6b40d51c3fe333d53a8ebaf1685edb0d71d7e089c7f96b8001"
	I1017 19:00:35.178896  268304 cri.go:89] found id: "22a266e5672abf5ca502cdbd17cb99d63f6b55ce0cb5a206303cec2167f7d569"
	I1017 19:00:35.178904  268304 cri.go:89] found id: "beb0486de70d8e5dc49e7b06450eb1df72f27a30d1a116fcef4687a1229bab02"
	I1017 19:00:35.178911  268304 cri.go:89] found id: "04fd09957b07ce3e283a4d21b3fd7e87d3b47d90a25d55656735805959496cf2"
	I1017 19:00:35.178915  268304 cri.go:89] found id: "612fc65e5e8667898a174c79ca2be5a8ae8041623681c350e5ee77608e36c583"
	I1017 19:00:35.178918  268304 cri.go:89] found id: ""
	I1017 19:00:35.178971  268304 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:00:35.193376  268304 out.go:203] 
	W1017 19:00:35.194732  268304 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:00:35.194756  268304 out.go:285] * 
	* 
	W1017 19:00:35.200920  268304 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:00:35.202442  268304 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-379549 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-379549 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (257.720533ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:00:35.252218  268346 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:00:35.252953  268346 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:35.252997  268346 out.go:374] Setting ErrFile to fd 2...
	I1017 19:00:35.253019  268346 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:35.253312  268346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:00:35.253690  268346 mustload.go:65] Loading cluster: addons-379549
	I1017 19:00:35.254153  268346 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:35.254198  268346 addons.go:606] checking whether the cluster is paused
	I1017 19:00:35.254331  268346 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:35.254371  268346 host.go:66] Checking if "addons-379549" exists ...
	I1017 19:00:35.254849  268346 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 19:00:35.274103  268346 ssh_runner.go:195] Run: systemctl --version
	I1017 19:00:35.274292  268346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 19:00:35.292967  268346 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 19:00:35.399974  268346 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:00:35.400108  268346 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:00:35.437418  268346 cri.go:89] found id: "5cf24bffa8a4abae885a44b533000299393dbf536f944868196b772da2ea935d"
	I1017 19:00:35.437439  268346 cri.go:89] found id: "80799fb75c9169389498ebfca9e8bd150dc22745bd39afd919de30736f993d78"
	I1017 19:00:35.437444  268346 cri.go:89] found id: "6fde7d0006c1aaf6e1954ddbde6bdf9af5d8e3650951bef9ba330e731274d207"
	I1017 19:00:35.437447  268346 cri.go:89] found id: "92b113c7cfe7940976d0561d7ffff8e1ec02e01f0dcc54cd8e589eabf32cc1b0"
	I1017 19:00:35.437451  268346 cri.go:89] found id: "5651bbb1546eae506067477cc633603ca2ac02a842f17e09ce6fe9a79ffa0e0e"
	I1017 19:00:35.437454  268346 cri.go:89] found id: "b06455475d2b37b302d9223e6cc497a0c417c77589f2ced0938ddbd1b2411306"
	I1017 19:00:35.437457  268346 cri.go:89] found id: "ce48b4c920d81fc27eaef5e1119f5ded186bb80b0f7da0544430a2c3fb4fc29a"
	I1017 19:00:35.437461  268346 cri.go:89] found id: "accf4579f8250f27038827ec1b315b311a306293af9ef176a69914469bb2353b"
	I1017 19:00:35.437464  268346 cri.go:89] found id: "fb1f7d0e065d8023e9546ae0a6a64fa04a57b0b47d3b44f594141de71b080618"
	I1017 19:00:35.437474  268346 cri.go:89] found id: "3986728e63c14c7fd277443687da324c568b58d749e701a217495bfa71741734"
	I1017 19:00:35.437478  268346 cri.go:89] found id: "88eee337e7ec6eae66159898b434ac7073a3200b04b237aec88ca3e25bdb2222"
	I1017 19:00:35.437481  268346 cri.go:89] found id: "012db353f99b6e2ef9ff8f6f38fdcfeb8ab14b588f53e8952b29395971f22d83"
	I1017 19:00:35.437484  268346 cri.go:89] found id: "9361ebb005625fb2ad3d70ee0ecdfc71f800630500b97f40a602782e074bb2c4"
	I1017 19:00:35.437487  268346 cri.go:89] found id: "de5165e5bfa9f6277e7973043a69fcf80ecd76150ce5c7fc069314ed88054ea7"
	I1017 19:00:35.437490  268346 cri.go:89] found id: "37d41037f4ee9382157bc059bf46e949eab3051aeb71edbb106837671cf3e24a"
	I1017 19:00:35.437497  268346 cri.go:89] found id: "c83ac4cff13e7be5a7a592b7ef3ad2c0dc7e4d780b6863448ea34fc512f98e11"
	I1017 19:00:35.437500  268346 cri.go:89] found id: "70437ef1453701665ef3d63f7f7a1d3affd361ef34251a1b4b2f6c5615248d1b"
	I1017 19:00:35.437505  268346 cri.go:89] found id: "0c926298efaa60b8e6e7e23cbd555e5271a4b331186cbf064b8a06a84c92da02"
	I1017 19:00:35.437508  268346 cri.go:89] found id: "ad27f04cf6a14e6b40d51c3fe333d53a8ebaf1685edb0d71d7e089c7f96b8001"
	I1017 19:00:35.437511  268346 cri.go:89] found id: "22a266e5672abf5ca502cdbd17cb99d63f6b55ce0cb5a206303cec2167f7d569"
	I1017 19:00:35.437516  268346 cri.go:89] found id: "beb0486de70d8e5dc49e7b06450eb1df72f27a30d1a116fcef4687a1229bab02"
	I1017 19:00:35.437519  268346 cri.go:89] found id: "04fd09957b07ce3e283a4d21b3fd7e87d3b47d90a25d55656735805959496cf2"
	I1017 19:00:35.437522  268346 cri.go:89] found id: "612fc65e5e8667898a174c79ca2be5a8ae8041623681c350e5ee77608e36c583"
	I1017 19:00:35.437524  268346 cri.go:89] found id: ""
	I1017 19:00:35.437581  268346 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:00:35.451716  268346 out.go:203] 
	W1017 19:00:35.452899  268346 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:00:35.452928  268346 out.go:285] * 
	* 
	W1017 19:00:35.459035  268346 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:00:35.460409  268346 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-379549 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (46.19s)
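Had "sudo runc list -f json" succeeded, the next step would be parsing its JSON array of container states and looking for any with a paused status. A short parsing sketch follows; the field names ("id", "status") are assumed from typical runc output rather than taken from this report.

	// runc_list_parse_sketch.go: parse runc's JSON container list and report
	// paused containers. Struct field tags are assumptions about runc output.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type containerState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			fmt.Println("runc list failed:", err)
			return
		}
		var states []containerState
		if err := json.Unmarshal(out, &states); err != nil {
			fmt.Println("unexpected runc output:", err)
			return
		}
		for _, s := range states {
			if s.Status == "paused" {
				fmt.Println("paused container:", s.ID)
			}
		}
	}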

                                                
                                    
x
+
TestAddons/parallel/Headlamp (3.22s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-379549 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-379549 --alsologtostderr -v=1: exit status 11 (257.346183ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 18:59:46.107561  266565 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:59:46.108286  266565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:46.108300  266565 out.go:374] Setting ErrFile to fd 2...
	I1017 18:59:46.108305  266565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:46.108640  266565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 18:59:46.108959  266565 mustload.go:65] Loading cluster: addons-379549
	I1017 18:59:46.109322  266565 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:46.109339  266565 addons.go:606] checking whether the cluster is paused
	I1017 18:59:46.109484  266565 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:46.109508  266565 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:59:46.109963  266565 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:59:46.127065  266565 ssh_runner.go:195] Run: systemctl --version
	I1017 18:59:46.127135  266565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:59:46.143446  266565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:59:46.246890  266565 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:59:46.246969  266565 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:59:46.276606  266565 cri.go:89] found id: "5cf24bffa8a4abae885a44b533000299393dbf536f944868196b772da2ea935d"
	I1017 18:59:46.276628  266565 cri.go:89] found id: "80799fb75c9169389498ebfca9e8bd150dc22745bd39afd919de30736f993d78"
	I1017 18:59:46.276633  266565 cri.go:89] found id: "6fde7d0006c1aaf6e1954ddbde6bdf9af5d8e3650951bef9ba330e731274d207"
	I1017 18:59:46.276647  266565 cri.go:89] found id: "92b113c7cfe7940976d0561d7ffff8e1ec02e01f0dcc54cd8e589eabf32cc1b0"
	I1017 18:59:46.276652  266565 cri.go:89] found id: "5651bbb1546eae506067477cc633603ca2ac02a842f17e09ce6fe9a79ffa0e0e"
	I1017 18:59:46.276656  266565 cri.go:89] found id: "b06455475d2b37b302d9223e6cc497a0c417c77589f2ced0938ddbd1b2411306"
	I1017 18:59:46.276675  266565 cri.go:89] found id: "ce48b4c920d81fc27eaef5e1119f5ded186bb80b0f7da0544430a2c3fb4fc29a"
	I1017 18:59:46.276683  266565 cri.go:89] found id: "accf4579f8250f27038827ec1b315b311a306293af9ef176a69914469bb2353b"
	I1017 18:59:46.276687  266565 cri.go:89] found id: "fb1f7d0e065d8023e9546ae0a6a64fa04a57b0b47d3b44f594141de71b080618"
	I1017 18:59:46.276710  266565 cri.go:89] found id: "3986728e63c14c7fd277443687da324c568b58d749e701a217495bfa71741734"
	I1017 18:59:46.276714  266565 cri.go:89] found id: "88eee337e7ec6eae66159898b434ac7073a3200b04b237aec88ca3e25bdb2222"
	I1017 18:59:46.276717  266565 cri.go:89] found id: "012db353f99b6e2ef9ff8f6f38fdcfeb8ab14b588f53e8952b29395971f22d83"
	I1017 18:59:46.276720  266565 cri.go:89] found id: "9361ebb005625fb2ad3d70ee0ecdfc71f800630500b97f40a602782e074bb2c4"
	I1017 18:59:46.276731  266565 cri.go:89] found id: "de5165e5bfa9f6277e7973043a69fcf80ecd76150ce5c7fc069314ed88054ea7"
	I1017 18:59:46.276735  266565 cri.go:89] found id: "37d41037f4ee9382157bc059bf46e949eab3051aeb71edbb106837671cf3e24a"
	I1017 18:59:46.276751  266565 cri.go:89] found id: "c83ac4cff13e7be5a7a592b7ef3ad2c0dc7e4d780b6863448ea34fc512f98e11"
	I1017 18:59:46.276760  266565 cri.go:89] found id: "70437ef1453701665ef3d63f7f7a1d3affd361ef34251a1b4b2f6c5615248d1b"
	I1017 18:59:46.276765  266565 cri.go:89] found id: "0c926298efaa60b8e6e7e23cbd555e5271a4b331186cbf064b8a06a84c92da02"
	I1017 18:59:46.276780  266565 cri.go:89] found id: "ad27f04cf6a14e6b40d51c3fe333d53a8ebaf1685edb0d71d7e089c7f96b8001"
	I1017 18:59:46.276786  266565 cri.go:89] found id: "22a266e5672abf5ca502cdbd17cb99d63f6b55ce0cb5a206303cec2167f7d569"
	I1017 18:59:46.276792  266565 cri.go:89] found id: "beb0486de70d8e5dc49e7b06450eb1df72f27a30d1a116fcef4687a1229bab02"
	I1017 18:59:46.276798  266565 cri.go:89] found id: "04fd09957b07ce3e283a4d21b3fd7e87d3b47d90a25d55656735805959496cf2"
	I1017 18:59:46.276801  266565 cri.go:89] found id: "612fc65e5e8667898a174c79ca2be5a8ae8041623681c350e5ee77608e36c583"
	I1017 18:59:46.276804  266565 cri.go:89] found id: ""
	I1017 18:59:46.276868  266565 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 18:59:46.292285  266565 out.go:203] 
	W1017 18:59:46.295270  266565 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 18:59:46.295306  266565 out.go:285] * 
	* 
	W1017 18:59:46.301293  266565 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 18:59:46.304179  266565 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-379549 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-379549
helpers_test.go:243: (dbg) docker inspect addons-379549:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "55fec2c4916f9dad039fe64a881991db0345ca7e5cbc7415c8368965be03ba66",
	        "Created": "2025-10-17T18:57:12.179689816Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 260760,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T18:57:12.241795967Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/55fec2c4916f9dad039fe64a881991db0345ca7e5cbc7415c8368965be03ba66/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/55fec2c4916f9dad039fe64a881991db0345ca7e5cbc7415c8368965be03ba66/hostname",
	        "HostsPath": "/var/lib/docker/containers/55fec2c4916f9dad039fe64a881991db0345ca7e5cbc7415c8368965be03ba66/hosts",
	        "LogPath": "/var/lib/docker/containers/55fec2c4916f9dad039fe64a881991db0345ca7e5cbc7415c8368965be03ba66/55fec2c4916f9dad039fe64a881991db0345ca7e5cbc7415c8368965be03ba66-json.log",
	        "Name": "/addons-379549",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-379549:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-379549",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "55fec2c4916f9dad039fe64a881991db0345ca7e5cbc7415c8368965be03ba66",
	                "LowerDir": "/var/lib/docker/overlay2/3e4eb3a0f914e87e9420aea224c0e4dea59ac71baf8770cf39cdb3283a5258ee-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3e4eb3a0f914e87e9420aea224c0e4dea59ac71baf8770cf39cdb3283a5258ee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3e4eb3a0f914e87e9420aea224c0e4dea59ac71baf8770cf39cdb3283a5258ee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3e4eb3a0f914e87e9420aea224c0e4dea59ac71baf8770cf39cdb3283a5258ee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-379549",
	                "Source": "/var/lib/docker/volumes/addons-379549/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-379549",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-379549",
	                "name.minikube.sigs.k8s.io": "addons-379549",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2f0e0e97287944811fc96deec392fc47351a9a255038b63627692f47b83a8471",
	            "SandboxKey": "/var/run/docker/netns/2f0e0e972879",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-379549": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:fd:d3:66:0f:64",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3b67371b301eeb2c9b0127b37d48aff81f3b763f5b36ea0e3cc33c895a80c6ed",
	                    "EndpointID": "959438c556ae4a71d046ca098ea53ba78c0c756e8bb3adc2770022e46ed75775",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-379549",
	                        "55fec2c4916f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
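The inspect output above reports "Paused": false for the addons-379549 container, so the node container itself is not paused; the MK_ADDON_DISABLE_PAUSED / MK_ADDON_ENABLE_PAUSED exits come from the failing runc check rather than from an actually paused cluster. A small sketch that queries the same field through the docker CLI (profile name taken from this run):

	// container_paused_check.go: ask the docker CLI whether the kic container
	// is paused, mirroring the "Paused": false field in the inspect output.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "inspect",
			"-f", "{{.State.Paused}}", "addons-379549").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println("container paused:", strings.TrimSpace(string(out)) == "true")
	}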
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-379549 -n addons-379549
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-379549 logs -n 25: (1.455782551s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-068460 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-068460   │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-068460                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-068460   │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ start   │ -o=json --download-only -p download-only-290584 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-290584   │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-290584                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-290584   │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-068460                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-068460   │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-290584                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-290584   │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ start   │ --download-only -p download-docker-786214 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-786214 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ delete  │ -p download-docker-786214                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-786214 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ start   │ --download-only -p binary-mirror-789835 --alsologtostderr --binary-mirror http://127.0.0.1:35757 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-789835   │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ delete  │ -p binary-mirror-789835                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-789835   │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ addons  │ enable dashboard -p addons-379549                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-379549                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ start   │ -p addons-379549 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:59 UTC │
	│ addons  │ addons-379549 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-379549 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	│ addons  │ enable headlamp -p addons-379549 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-379549          │ jenkins │ v1.37.0 │ 17 Oct 25 18:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 18:56:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 18:56:45.923022  260360 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:56:45.923196  260360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:45.923226  260360 out.go:374] Setting ErrFile to fd 2...
	I1017 18:56:45.923246  260360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:45.923522  260360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 18:56:45.924012  260360 out.go:368] Setting JSON to false
	I1017 18:56:45.924858  260360 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5957,"bootTime":1760721449,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 18:56:45.924952  260360 start.go:141] virtualization:  
	I1017 18:56:45.928245  260360 out.go:179] * [addons-379549] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 18:56:45.931950  260360 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 18:56:45.932016  260360 notify.go:220] Checking for updates...
	I1017 18:56:45.937881  260360 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 18:56:45.940878  260360 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 18:56:45.943708  260360 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 18:56:45.946543  260360 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 18:56:45.949551  260360 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 18:56:45.952713  260360 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 18:56:45.978539  260360 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 18:56:45.978728  260360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 18:56:46.045112  260360 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-17 18:56:46.035136305 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 18:56:46.045223  260360 docker.go:318] overlay module found
	I1017 18:56:46.048318  260360 out.go:179] * Using the docker driver based on user configuration
	I1017 18:56:46.051151  260360 start.go:305] selected driver: docker
	I1017 18:56:46.051174  260360 start.go:925] validating driver "docker" against <nil>
	I1017 18:56:46.051188  260360 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 18:56:46.051879  260360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 18:56:46.106558  260360 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-17 18:56:46.097757384 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 18:56:46.106725  260360 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 18:56:46.106947  260360 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 18:56:46.109907  260360 out.go:179] * Using Docker driver with root privileges
	I1017 18:56:46.112647  260360 cni.go:84] Creating CNI manager for ""
	I1017 18:56:46.112715  260360 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 18:56:46.112728  260360 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 18:56:46.112798  260360 start.go:349] cluster config:
	{Name:addons-379549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-379549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 18:56:46.117672  260360 out.go:179] * Starting "addons-379549" primary control-plane node in "addons-379549" cluster
	I1017 18:56:46.120572  260360 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 18:56:46.123442  260360 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 18:56:46.126318  260360 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 18:56:46.126438  260360 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 18:56:46.126330  260360 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 18:56:46.126451  260360 cache.go:58] Caching tarball of preloaded images
	I1017 18:56:46.126530  260360 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 18:56:46.126540  260360 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 18:56:46.126884  260360 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/config.json ...
	I1017 18:56:46.126915  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/config.json: {Name:mk226279b9a196e1a7ebbe8a74e398252caee8a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:56:46.141959  260360 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1017 18:56:46.142111  260360 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1017 18:56:46.142130  260360 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1017 18:56:46.142141  260360 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1017 18:56:46.142150  260360 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1017 18:56:46.142155  260360 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1017 18:57:04.187220  260360 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1017 18:57:04.187256  260360 cache.go:232] Successfully downloaded all kic artifacts
	I1017 18:57:04.187285  260360 start.go:360] acquireMachinesLock for addons-379549: {Name:mka00eef85230c5dd15a7d8abde55ed543d50e6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 18:57:04.187401  260360 start.go:364] duration metric: took 97.146µs to acquireMachinesLock for "addons-379549"
	I1017 18:57:04.187436  260360 start.go:93] Provisioning new machine with config: &{Name:addons-379549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-379549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 18:57:04.187535  260360 start.go:125] createHost starting for "" (driver="docker")
	I1017 18:57:04.191083  260360 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1017 18:57:04.191338  260360 start.go:159] libmachine.API.Create for "addons-379549" (driver="docker")
	I1017 18:57:04.191388  260360 client.go:168] LocalClient.Create starting
	I1017 18:57:04.191540  260360 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem
	I1017 18:57:05.215779  260360 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem
	I1017 18:57:05.364193  260360 cli_runner.go:164] Run: docker network inspect addons-379549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 18:57:05.380198  260360 cli_runner.go:211] docker network inspect addons-379549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 18:57:05.380300  260360 network_create.go:284] running [docker network inspect addons-379549] to gather additional debugging logs...
	I1017 18:57:05.380321  260360 cli_runner.go:164] Run: docker network inspect addons-379549
	W1017 18:57:05.395850  260360 cli_runner.go:211] docker network inspect addons-379549 returned with exit code 1
	I1017 18:57:05.395884  260360 network_create.go:287] error running [docker network inspect addons-379549]: docker network inspect addons-379549: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-379549 not found
	I1017 18:57:05.395898  260360 network_create.go:289] output of [docker network inspect addons-379549]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-379549 not found
	
	** /stderr **
	I1017 18:57:05.396013  260360 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 18:57:05.412938  260360 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001be5450}
	I1017 18:57:05.412986  260360 network_create.go:124] attempt to create docker network addons-379549 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1017 18:57:05.413044  260360 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-379549 addons-379549
	I1017 18:57:05.465282  260360 network_create.go:108] docker network addons-379549 192.168.49.0/24 created
	I1017 18:57:05.465325  260360 kic.go:121] calculated static IP "192.168.49.2" for the "addons-379549" container
	I1017 18:57:05.465400  260360 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 18:57:05.481289  260360 cli_runner.go:164] Run: docker volume create addons-379549 --label name.minikube.sigs.k8s.io=addons-379549 --label created_by.minikube.sigs.k8s.io=true
	I1017 18:57:05.498770  260360 oci.go:103] Successfully created a docker volume addons-379549
	I1017 18:57:05.498863  260360 cli_runner.go:164] Run: docker run --rm --name addons-379549-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-379549 --entrypoint /usr/bin/test -v addons-379549:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 18:57:07.611460  260360 cli_runner.go:217] Completed: docker run --rm --name addons-379549-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-379549 --entrypoint /usr/bin/test -v addons-379549:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (2.112558516s)
	I1017 18:57:07.611492  260360 oci.go:107] Successfully prepared a docker volume addons-379549
	I1017 18:57:07.611540  260360 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 18:57:07.611564  260360 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 18:57:07.611632  260360 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-379549:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1017 18:57:12.102739  260360 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-379549:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.491065061s)
	I1017 18:57:12.102773  260360 kic.go:203] duration metric: took 4.491206399s to extract preloaded images to volume ...
	W1017 18:57:12.102956  260360 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1017 18:57:12.103072  260360 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 18:57:12.164473  260360 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-379549 --name addons-379549 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-379549 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-379549 --network addons-379549 --ip 192.168.49.2 --volume addons-379549:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 18:57:12.473779  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Running}}
	I1017 18:57:12.500020  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:12.518959  260360 cli_runner.go:164] Run: docker exec addons-379549 stat /var/lib/dpkg/alternatives/iptables
	I1017 18:57:12.572662  260360 oci.go:144] the created container "addons-379549" has a running status.
	I1017 18:57:12.572693  260360 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa...
	I1017 18:57:13.655449  260360 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 18:57:13.678029  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:13.699636  260360 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 18:57:13.699657  260360 kic_runner.go:114] Args: [docker exec --privileged addons-379549 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 18:57:13.740667  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:13.756834  260360 machine.go:93] provisionDockerMachine start ...
	I1017 18:57:13.756935  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:13.772867  260360 main.go:141] libmachine: Using SSH client type: native
	I1017 18:57:13.773184  260360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1017 18:57:13.773199  260360 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 18:57:13.915907  260360 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-379549
	
	I1017 18:57:13.915935  260360 ubuntu.go:182] provisioning hostname "addons-379549"
	I1017 18:57:13.915998  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:13.934912  260360 main.go:141] libmachine: Using SSH client type: native
	I1017 18:57:13.935235  260360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1017 18:57:13.935252  260360 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-379549 && echo "addons-379549" | sudo tee /etc/hostname
	I1017 18:57:14.089851  260360 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-379549
	
	I1017 18:57:14.089947  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:14.108002  260360 main.go:141] libmachine: Using SSH client type: native
	I1017 18:57:14.108322  260360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1017 18:57:14.108344  260360 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-379549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-379549/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-379549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 18:57:14.252703  260360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 18:57:14.252732  260360 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 18:57:14.252760  260360 ubuntu.go:190] setting up certificates
	I1017 18:57:14.252770  260360 provision.go:84] configureAuth start
	I1017 18:57:14.252843  260360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-379549
	I1017 18:57:14.268726  260360 provision.go:143] copyHostCerts
	I1017 18:57:14.268815  260360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 18:57:14.268948  260360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 18:57:14.269016  260360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 18:57:14.269069  260360 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.addons-379549 san=[127.0.0.1 192.168.49.2 addons-379549 localhost minikube]
	I1017 18:57:14.624117  260360 provision.go:177] copyRemoteCerts
	I1017 18:57:14.624183  260360 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 18:57:14.624228  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:14.642148  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:14.748041  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 18:57:14.764641  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 18:57:14.781215  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 18:57:14.798503  260360 provision.go:87] duration metric: took 545.715741ms to configureAuth
	I1017 18:57:14.798530  260360 ubuntu.go:206] setting minikube options for container-runtime
	I1017 18:57:14.798764  260360 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:57:14.798902  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:14.817109  260360 main.go:141] libmachine: Using SSH client type: native
	I1017 18:57:14.817445  260360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1017 18:57:14.817468  260360 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 18:57:15.073484  260360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 18:57:15.073510  260360 machine.go:96] duration metric: took 1.31665209s to provisionDockerMachine
	I1017 18:57:15.073520  260360 client.go:171] duration metric: took 10.882122485s to LocalClient.Create
	I1017 18:57:15.073533  260360 start.go:167] duration metric: took 10.882196115s to libmachine.API.Create "addons-379549"
	I1017 18:57:15.073540  260360 start.go:293] postStartSetup for "addons-379549" (driver="docker")
	I1017 18:57:15.073551  260360 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 18:57:15.073682  260360 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 18:57:15.073737  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:15.091582  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:15.196744  260360 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 18:57:15.200325  260360 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 18:57:15.200354  260360 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 18:57:15.200367  260360 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 18:57:15.200436  260360 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 18:57:15.200463  260360 start.go:296] duration metric: took 126.916952ms for postStartSetup
	I1017 18:57:15.200825  260360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-379549
	I1017 18:57:15.217007  260360 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/config.json ...
	I1017 18:57:15.217312  260360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 18:57:15.217362  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:15.233525  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:15.333718  260360 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 18:57:15.338635  260360 start.go:128] duration metric: took 11.15108393s to createHost
	I1017 18:57:15.338667  260360 start.go:83] releasing machines lock for "addons-379549", held for 11.151249103s
	I1017 18:57:15.338742  260360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-379549
	I1017 18:57:15.355117  260360 ssh_runner.go:195] Run: cat /version.json
	I1017 18:57:15.355169  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:15.355201  260360 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 18:57:15.355271  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:15.378592  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:15.380055  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:15.569774  260360 ssh_runner.go:195] Run: systemctl --version
	I1017 18:57:15.575948  260360 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 18:57:15.610338  260360 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 18:57:15.614453  260360 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 18:57:15.614526  260360 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 18:57:15.641529  260360 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1017 18:57:15.641550  260360 start.go:495] detecting cgroup driver to use...
	I1017 18:57:15.641586  260360 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 18:57:15.641635  260360 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 18:57:15.657651  260360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 18:57:15.669534  260360 docker.go:218] disabling cri-docker service (if available) ...
	I1017 18:57:15.669627  260360 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 18:57:15.686887  260360 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 18:57:15.704885  260360 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 18:57:15.818804  260360 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 18:57:15.943700  260360 docker.go:234] disabling docker service ...
	I1017 18:57:15.943801  260360 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 18:57:15.964351  260360 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 18:57:15.977403  260360 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 18:57:16.097830  260360 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 18:57:16.216321  260360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 18:57:16.229441  260360 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 18:57:16.243627  260360 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 18:57:16.243697  260360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:57:16.252363  260360 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 18:57:16.252437  260360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:57:16.262152  260360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:57:16.270961  260360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:57:16.279816  260360 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 18:57:16.288177  260360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:57:16.296775  260360 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:57:16.311072  260360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 18:57:16.321422  260360 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 18:57:16.330500  260360 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 18:57:16.338919  260360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 18:57:16.465775  260360 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 18:57:16.594833  260360 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 18:57:16.594918  260360 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 18:57:16.598595  260360 start.go:563] Will wait 60s for crictl version
	I1017 18:57:16.598660  260360 ssh_runner.go:195] Run: which crictl
	I1017 18:57:16.602036  260360 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 18:57:16.626335  260360 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 18:57:16.626436  260360 ssh_runner.go:195] Run: crio --version
	I1017 18:57:16.657664  260360 ssh_runner.go:195] Run: crio --version
	I1017 18:57:16.688741  260360 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 18:57:16.691620  260360 cli_runner.go:164] Run: docker network inspect addons-379549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 18:57:16.708111  260360 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 18:57:16.711945  260360 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 18:57:16.721686  260360 kubeadm.go:883] updating cluster {Name:addons-379549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-379549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 18:57:16.721796  260360 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 18:57:16.721853  260360 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 18:57:16.753916  260360 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 18:57:16.753938  260360 crio.go:433] Images already preloaded, skipping extraction
	I1017 18:57:16.753999  260360 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 18:57:16.786045  260360 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 18:57:16.786121  260360 cache_images.go:85] Images are preloaded, skipping loading
	I1017 18:57:16.786250  260360 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 18:57:16.786382  260360 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-379549 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-379549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 18:57:16.786615  260360 ssh_runner.go:195] Run: crio config
	I1017 18:57:16.844979  260360 cni.go:84] Creating CNI manager for ""
	I1017 18:57:16.845019  260360 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 18:57:16.845041  260360 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 18:57:16.845065  260360 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-379549 NodeName:addons-379549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 18:57:16.845217  260360 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-379549"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 18:57:16.845378  260360 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 18:57:16.853109  260360 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 18:57:16.853224  260360 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 18:57:16.860254  260360 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 18:57:16.872683  260360 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 18:57:16.885295  260360 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
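	The rendered kubeadm config shown above is the file written to /var/tmp/minikube/kubeadm.yaml.new here. Once it has been copied to /var/tmp/minikube/kubeadm.yaml (see the cp further down), it can be fed back through kubeadm's own validator on the node as a sanity check; this is illustrative, not something the test itself does:
	
	    minikube ssh -p addons-379549 "sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml"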
	I1017 18:57:16.897975  260360 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1017 18:57:16.901837  260360 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
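	The bash one-liner above drops any stale control-plane.minikube.internal entry from /etc/hosts and appends the current mapping to 192.168.49.2. A quick verification from outside the node, illustrative only:
	
	    minikube ssh -p addons-379549 "grep control-plane.minikube.internal /etc/hosts"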
	I1017 18:57:16.911393  260360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 18:57:17.020146  260360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 18:57:17.037001  260360 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549 for IP: 192.168.49.2
	I1017 18:57:17.037078  260360 certs.go:195] generating shared ca certs ...
	I1017 18:57:17.037113  260360 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:17.037336  260360 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 18:57:17.352272  260360 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt ...
	I1017 18:57:17.352303  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt: {Name:mkd0682e9ec696a5dc3c6408bce8c9ab628da2b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:17.352545  260360 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key ...
	I1017 18:57:17.352560  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key: {Name:mk1b70c572c926b863145e313486a5bdd6a8745e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:17.352710  260360 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 18:57:18.438561  260360 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt ...
	I1017 18:57:18.438595  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt: {Name:mk28577ab9371ba91d63d0876a6982d2a222e4b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:18.438796  260360 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key ...
	I1017 18:57:18.438809  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key: {Name:mk29a7d483ab314486849922d4ed3f5ae86198c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:18.438894  260360 certs.go:257] generating profile certs ...
	I1017 18:57:18.438956  260360 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.key
	I1017 18:57:18.438975  260360 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt with IP's: []
	I1017 18:57:19.537712  260360 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt ...
	I1017 18:57:19.537744  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: {Name:mk6eaf62f01188e8fb25b1a3cb3b4a8aafb36db6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:19.537939  260360 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.key ...
	I1017 18:57:19.537952  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.key: {Name:mk78bd2ed432cd9cc4b15baaa295e748d5ea633f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:19.538043  260360 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.key.29479c62
	I1017 18:57:19.538065  260360 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.crt.29479c62 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1017 18:57:19.625229  260360 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.crt.29479c62 ...
	I1017 18:57:19.625258  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.crt.29479c62: {Name:mk514b25fe2233f248f1fe4ad25a562c05e30f40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:19.625422  260360 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.key.29479c62 ...
	I1017 18:57:19.625433  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.key.29479c62: {Name:mk3350940796e397f2e3d8e9d43c2a533084a50e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:19.625514  260360 certs.go:382] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.crt.29479c62 -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.crt
	I1017 18:57:19.625587  260360 certs.go:386] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.key.29479c62 -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.key
	I1017 18:57:19.625636  260360 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/proxy-client.key
	I1017 18:57:19.625651  260360 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/proxy-client.crt with IP's: []
	I1017 18:57:21.964513  260360 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/proxy-client.crt ...
	I1017 18:57:21.964553  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/proxy-client.crt: {Name:mk6de73cde00b4d1c013607eed0c20a102f7da1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:21.964755  260360 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/proxy-client.key ...
	I1017 18:57:21.964770  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/proxy-client.key: {Name:mk26f3413eac9176bc7d5de7fd6760ef830e1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:21.964962  260360 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 18:57:21.965013  260360 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 18:57:21.965042  260360 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 18:57:21.965069  260360 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 18:57:21.965697  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 18:57:21.984675  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 18:57:22.002685  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 18:57:22.023425  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 18:57:22.042344  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1017 18:57:22.060851  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 18:57:22.080145  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 18:57:22.099098  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 18:57:22.117661  260360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 18:57:22.135751  260360 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
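	At this point every certificate generated above has been copied under /var/lib/minikube/certs on the node. The apiserver cert was requested with SANs for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2 (see the "generating signed profile cert" lines), which can be double-checked with openssl; illustrative only:
	
	    minikube ssh -p addons-379549 "sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"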
	I1017 18:57:22.148649  260360 ssh_runner.go:195] Run: openssl version
	I1017 18:57:22.155063  260360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 18:57:22.163352  260360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 18:57:22.166973  260360 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 18:57:22.167040  260360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 18:57:22.207924  260360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
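	The two steps above pair up: openssl x509 -hash prints the subject hash of the minikube CA, and that hash (b5213941) becomes the name of the hashed symlink under /etc/ssl/certs that OpenSSL-based clients use to locate the CA. Reproduced by hand inside the node, illustratively:
	
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem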
	I1017 18:57:22.216202  260360 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 18:57:22.219993  260360 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 18:57:22.220045  260360 kubeadm.go:400] StartCluster: {Name:addons-379549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-379549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 18:57:22.220159  260360 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:57:22.220248  260360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:57:22.251333  260360 cri.go:89] found id: ""
	I1017 18:57:22.251450  260360 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 18:57:22.259122  260360 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 18:57:22.266820  260360 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 18:57:22.266931  260360 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 18:57:22.274761  260360 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 18:57:22.274782  260360 kubeadm.go:157] found existing configuration files:
	
	I1017 18:57:22.274836  260360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 18:57:22.282682  260360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 18:57:22.282753  260360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 18:57:22.290594  260360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 18:57:22.298666  260360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 18:57:22.298730  260360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 18:57:22.306007  260360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 18:57:22.313971  260360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 18:57:22.314037  260360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 18:57:22.321858  260360 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 18:57:22.330061  260360 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 18:57:22.330124  260360 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 18:57:22.337941  260360 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 18:57:22.379933  260360 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 18:57:22.379996  260360 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 18:57:22.401337  260360 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 18:57:22.401417  260360 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1017 18:57:22.401459  260360 kubeadm.go:318] OS: Linux
	I1017 18:57:22.401511  260360 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 18:57:22.401566  260360 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1017 18:57:22.401619  260360 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 18:57:22.401674  260360 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 18:57:22.401729  260360 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 18:57:22.401783  260360 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 18:57:22.401837  260360 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 18:57:22.401892  260360 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 18:57:22.401944  260360 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1017 18:57:22.469843  260360 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 18:57:22.469989  260360 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 18:57:22.470091  260360 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 18:57:22.480364  260360 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 18:57:22.484497  260360 out.go:252]   - Generating certificates and keys ...
	I1017 18:57:22.484661  260360 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 18:57:22.484758  260360 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 18:57:22.813074  260360 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 18:57:23.742162  260360 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 18:57:24.134338  260360 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 18:57:24.464535  260360 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 18:57:24.727828  260360 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 18:57:24.727963  260360 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-379549 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1017 18:57:25.748111  260360 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 18:57:25.748283  260360 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-379549 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1017 18:57:26.355224  260360 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 18:57:26.623683  260360 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 18:57:26.700096  260360 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 18:57:26.700208  260360 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 18:57:27.242667  260360 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 18:57:27.513928  260360 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 18:57:27.822697  260360 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 18:57:28.439146  260360 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 18:57:28.957866  260360 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 18:57:28.958709  260360 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 18:57:28.961634  260360 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 18:57:28.965127  260360 out.go:252]   - Booting up control plane ...
	I1017 18:57:28.965237  260360 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 18:57:28.965340  260360 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 18:57:28.965974  260360 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 18:57:28.986434  260360 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 18:57:28.986542  260360 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 18:57:28.995297  260360 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 18:57:28.995401  260360 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 18:57:28.995442  260360 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 18:57:29.124004  260360 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 18:57:29.124123  260360 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 18:57:30.624666  260360 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500904511s
	I1017 18:57:30.628265  260360 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 18:57:30.628363  260360 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1017 18:57:30.628456  260360 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 18:57:30.628555  260360 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 18:57:33.282137  260360 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.65328676s
	I1017 18:57:35.891599  260360 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.26328261s
	I1017 18:57:36.631486  260360 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001072811s
	I1017 18:57:36.651809  260360 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 18:57:36.677306  260360 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 18:57:36.691312  260360 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 18:57:36.691537  260360 kubeadm.go:318] [mark-control-plane] Marking the node addons-379549 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 18:57:36.704574  260360 kubeadm.go:318] [bootstrap-token] Using token: aj3xrv.v6mngpc276ee8slz
	I1017 18:57:36.709760  260360 out.go:252]   - Configuring RBAC rules ...
	I1017 18:57:36.709900  260360 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 18:57:36.712548  260360 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 18:57:36.722994  260360 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 18:57:36.732832  260360 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 18:57:36.737911  260360 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 18:57:36.742261  260360 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 18:57:37.037529  260360 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 18:57:37.482923  260360 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 18:57:38.036856  260360 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 18:57:38.038341  260360 kubeadm.go:318] 
	I1017 18:57:38.038419  260360 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 18:57:38.038425  260360 kubeadm.go:318] 
	I1017 18:57:38.038518  260360 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 18:57:38.038526  260360 kubeadm.go:318] 
	I1017 18:57:38.038562  260360 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 18:57:38.038624  260360 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 18:57:38.038694  260360 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 18:57:38.038700  260360 kubeadm.go:318] 
	I1017 18:57:38.038756  260360 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 18:57:38.038761  260360 kubeadm.go:318] 
	I1017 18:57:38.038812  260360 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 18:57:38.038816  260360 kubeadm.go:318] 
	I1017 18:57:38.038870  260360 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 18:57:38.038947  260360 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 18:57:38.039024  260360 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 18:57:38.039030  260360 kubeadm.go:318] 
	I1017 18:57:38.039129  260360 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 18:57:38.039210  260360 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 18:57:38.039215  260360 kubeadm.go:318] 
	I1017 18:57:38.039301  260360 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token aj3xrv.v6mngpc276ee8slz \
	I1017 18:57:38.039407  260360 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 \
	I1017 18:57:38.039428  260360 kubeadm.go:318] 	--control-plane 
	I1017 18:57:38.039432  260360 kubeadm.go:318] 
	I1017 18:57:38.039519  260360 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 18:57:38.039523  260360 kubeadm.go:318] 
	I1017 18:57:38.039607  260360 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token aj3xrv.v6mngpc276ee8slz \
	I1017 18:57:38.039715  260360 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 
	I1017 18:57:38.043863  260360 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1017 18:57:38.044108  260360 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1017 18:57:38.044253  260360 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
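	The --discovery-token-ca-cert-hash value in the join commands above is a SHA-256 over the cluster CA's public key. Should it need to be re-derived on this node (where the CA lives under /var/lib/minikube/certs rather than the stock /etc/kubernetes/pki), the usual kubeadm recipe for an RSA CA applies; illustrative:
	
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'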
	I1017 18:57:38.044291  260360 cni.go:84] Creating CNI manager for ""
	I1017 18:57:38.044301  260360 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 18:57:38.049641  260360 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 18:57:38.052455  260360 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 18:57:38.057351  260360 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 18:57:38.057374  260360 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 18:57:38.071848  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
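	The kindnet manifest has now been applied from /var/tmp/minikube/cni.yaml. A quick check that the CNI pods come up, assuming kubectl is pointed at this cluster (illustrative):
	
	    kubectl -n kube-system get pods -o wide | grep kindnet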
	I1017 18:57:38.347128  260360 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 18:57:38.347282  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:38.347373  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-379549 minikube.k8s.io/updated_at=2025_10_17T18_57_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d minikube.k8s.io/name=addons-379549 minikube.k8s.io/primary=true
	I1017 18:57:38.497621  260360 ops.go:34] apiserver oom_adj: -16
	I1017 18:57:38.497730  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:38.998231  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:39.498366  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:39.997813  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:40.498405  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:40.998566  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:41.498240  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:41.997931  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:42.498679  260360 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 18:57:42.636269  260360 kubeadm.go:1113] duration metric: took 4.289030384s to wait for elevateKubeSystemPrivileges
	I1017 18:57:42.636303  260360 kubeadm.go:402] duration metric: took 20.416261286s to StartCluster
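	The repeated "kubectl get sa default" calls above are minikube polling until the default ServiceAccount exists, after which the duration metrics for elevateKubeSystemPrivileges and StartCluster are recorded. The equivalent manual check would be, illustratively:
	
	    kubectl -n default get serviceaccount default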
	I1017 18:57:42.636320  260360 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:42.636454  260360 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 18:57:42.636997  260360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 18:57:42.637251  260360 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 18:57:42.637409  260360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 18:57:42.637729  260360 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:57:42.637786  260360 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
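	The map above is the full set of addons this run asks to enable; each "Setting addon ...=true" line that follows corresponds to one entry. Outside the test harness the same toggles are driven through the minikube CLI, for example (illustrative):
	
	    minikube addons list -p addons-379549
	    minikube addons enable metrics-server -p addons-379549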
	I1017 18:57:42.637875  260360 addons.go:69] Setting yakd=true in profile "addons-379549"
	I1017 18:57:42.637895  260360 addons.go:238] Setting addon yakd=true in "addons-379549"
	I1017 18:57:42.637929  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.638711  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.638895  260360 addons.go:69] Setting inspektor-gadget=true in profile "addons-379549"
	I1017 18:57:42.638917  260360 addons.go:238] Setting addon inspektor-gadget=true in "addons-379549"
	I1017 18:57:42.638941  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.639473  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.640058  260360 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-379549"
	I1017 18:57:42.640081  260360 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-379549"
	I1017 18:57:42.640111  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.640566  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.644266  260360 addons.go:69] Setting metrics-server=true in profile "addons-379549"
	I1017 18:57:42.644308  260360 addons.go:238] Setting addon metrics-server=true in "addons-379549"
	I1017 18:57:42.644340  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.645002  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.676553  260360 addons.go:69] Setting cloud-spanner=true in profile "addons-379549"
	I1017 18:57:42.676611  260360 addons.go:238] Setting addon cloud-spanner=true in "addons-379549"
	I1017 18:57:42.676648  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.677369  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.680715  260360 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-379549"
	I1017 18:57:42.680764  260360 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-379549"
	I1017 18:57:42.680813  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.683518  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.692930  260360 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-379549"
	I1017 18:57:42.693028  260360 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-379549"
	I1017 18:57:42.693066  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.693773  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.700589  260360 addons.go:69] Setting registry=true in profile "addons-379549"
	I1017 18:57:42.700631  260360 addons.go:238] Setting addon registry=true in "addons-379549"
	I1017 18:57:42.700679  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.701342  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.709141  260360 addons.go:69] Setting default-storageclass=true in profile "addons-379549"
	I1017 18:57:42.709170  260360 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-379549"
	I1017 18:57:42.709539  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.722131  260360 addons.go:69] Setting registry-creds=true in profile "addons-379549"
	I1017 18:57:42.742147  260360 addons.go:238] Setting addon registry-creds=true in "addons-379549"
	I1017 18:57:42.742217  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.742763  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.722777  260360 addons.go:69] Setting storage-provisioner=true in profile "addons-379549"
	I1017 18:57:42.763748  260360 addons.go:238] Setting addon storage-provisioner=true in "addons-379549"
	I1017 18:57:42.763796  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.764363  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.722806  260360 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-379549"
	I1017 18:57:42.777202  260360 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-379549"
	I1017 18:57:42.777592  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.722817  260360 addons.go:69] Setting volcano=true in profile "addons-379549"
	I1017 18:57:42.795928  260360 addons.go:238] Setting addon volcano=true in "addons-379549"
	I1017 18:57:42.795990  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.796559  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.722824  260360 addons.go:69] Setting volumesnapshots=true in profile "addons-379549"
	I1017 18:57:42.811899  260360 addons.go:238] Setting addon volumesnapshots=true in "addons-379549"
	I1017 18:57:42.811942  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.812499  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.723006  260360 out.go:179] * Verifying Kubernetes components...
	I1017 18:57:42.740077  260360 addons.go:69] Setting gcp-auth=true in profile "addons-379549"
	I1017 18:57:42.838928  260360 mustload.go:65] Loading cluster: addons-379549
	I1017 18:57:42.839150  260360 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:57:42.839404  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.850995  260360 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1017 18:57:42.740092  260360 addons.go:69] Setting ingress=true in profile "addons-379549"
	I1017 18:57:42.857859  260360 addons.go:238] Setting addon ingress=true in "addons-379549"
	I1017 18:57:42.857906  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.858365  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.740101  260360 addons.go:69] Setting ingress-dns=true in profile "addons-379549"
	I1017 18:57:42.872223  260360 addons.go:238] Setting addon ingress-dns=true in "addons-379549"
	I1017 18:57:42.872272  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.872789  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.877647  260360 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1017 18:57:42.895294  260360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 18:57:42.901880  260360 out.go:179]   - Using image docker.io/registry:3.0.0
	I1017 18:57:42.928910  260360 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1017 18:57:42.932211  260360 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1017 18:57:42.935642  260360 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1017 18:57:42.935676  260360 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1017 18:57:42.935753  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:42.936121  260360 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1017 18:57:42.939724  260360 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1017 18:57:42.942373  260360 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 18:57:42.942390  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1017 18:57:42.942451  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:42.942642  260360 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1017 18:57:42.942856  260360 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1017 18:57:42.950261  260360 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1017 18:57:42.954140  260360 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1017 18:57:42.954519  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1017 18:57:42.954582  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:42.967692  260360 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1017 18:57:42.975920  260360 addons.go:238] Setting addon default-storageclass=true in "addons-379549"
	I1017 18:57:42.976801  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:42.977174  260360 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1017 18:57:42.977284  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:42.977423  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:42.999843  260360 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1017 18:57:42.975985  260360 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1017 18:57:43.000210  260360 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1017 18:57:43.000294  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:42.976108  260360 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1017 18:57:43.030511  260360 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1017 18:57:43.034981  260360 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 18:57:43.035053  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1017 18:57:43.035148  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:43.049234  260360 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-379549"
	I1017 18:57:43.049276  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:43.049671  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:43.060566  260360 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1017 18:57:43.064291  260360 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1017 18:57:43.068157  260360 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1017 18:57:43.071471  260360 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1017 18:57:43.072116  260360 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 18:57:43.072140  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1017 18:57:43.072203  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:43.096418  260360 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1017 18:57:43.100735  260360 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1017 18:57:43.100776  260360 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1017 18:57:43.100954  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:43.110806  260360 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 18:57:43.111059  260360 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1017 18:57:43.111076  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1017 18:57:43.111150  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	W1017 18:57:43.128988  260360 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1017 18:57:43.149016  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:43.151117  260360 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1017 18:57:43.151135  260360 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1017 18:57:43.151195  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:43.151222  260360 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 18:57:43.151250  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 18:57:43.151300  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:43.187593  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.209826  260360 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1017 18:57:43.216725  260360 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 18:57:43.216840  260360 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 18:57:43.221029  260360 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 18:57:43.216884  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.216919  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.222310  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:43.228483  260360 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 18:57:43.228508  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1017 18:57:43.228597  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:43.260675  260360 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1017 18:57:43.262214  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.267030  260360 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 18:57:43.272196  260360 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 18:57:43.272265  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1017 18:57:43.272348  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:43.293085  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.320719  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.339427  260360 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1017 18:57:43.342544  260360 out.go:179]   - Using image docker.io/busybox:stable
	I1017 18:57:43.347934  260360 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 18:57:43.347958  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1017 18:57:43.348023  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:43.360676  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.373140  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.378959  260360 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
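	The long pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the gateway address 192.168.49.1 and the log plugin is enabled. Whether the patch took can be read straight back from the ConfigMap; illustrative:
	
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 hosts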
	I1017 18:57:43.379239  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.380331  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.415639  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	W1017 18:57:43.424895  260360 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1017 18:57:43.424944  260360 retry.go:31] will retry after 349.95256ms: ssh: handshake failed: EOF
	I1017 18:57:43.443878  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.447985  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.458620  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:43.459135  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	W1017 18:57:43.477370  260360 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1017 18:57:43.477400  260360 retry.go:31] will retry after 355.396394ms: ssh: handshake failed: EOF
	I1017 18:57:43.537319  260360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1017 18:57:43.776328  260360 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1017 18:57:43.776372  260360 retry.go:31] will retry after 257.451368ms: ssh: handshake failed: EOF
	I1017 18:57:44.071228  260360 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1017 18:57:44.071252  260360 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1017 18:57:44.075109  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1017 18:57:44.082129  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 18:57:44.091952  260360 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1017 18:57:44.092030  260360 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1017 18:57:44.128328  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 18:57:44.129910  260360 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:44.129971  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1017 18:57:44.138351  260360 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1017 18:57:44.138423  260360 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1017 18:57:44.146443  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 18:57:44.166967  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 18:57:44.175833  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 18:57:44.185767  260360 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1017 18:57:44.185841  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1017 18:57:44.190022  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 18:57:44.195800  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 18:57:44.286620  260360 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1017 18:57:44.286694  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1017 18:57:44.313694  260360 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1017 18:57:44.313771  260360 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1017 18:57:44.338722  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:44.344756  260360 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1017 18:57:44.344828  260360 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1017 18:57:44.408026  260360 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1017 18:57:44.408109  260360 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1017 18:57:44.450555  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 18:57:44.509009  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1017 18:57:44.537695  260360 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1017 18:57:44.537769  260360 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1017 18:57:44.564453  260360 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1017 18:57:44.564568  260360 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1017 18:57:44.577565  260360 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 18:57:44.577644  260360 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1017 18:57:44.675600  260360 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1017 18:57:44.675678  260360 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1017 18:57:44.703583  260360 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1017 18:57:44.703602  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1017 18:57:44.720924  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 18:57:44.723942  260360 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1017 18:57:44.723962  260360 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1017 18:57:44.836815  260360 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 18:57:44.836885  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1017 18:57:44.842296  260360 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.463299723s)
	I1017 18:57:44.842375  260360 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
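For reference, the sed pipeline logged above rewrites the kube-system/coredns ConfigMap so that host.minikube.internal resolves to the gateway address 192.168.49.1. A minimal sketch of the resulting Corefile fragment follows; only the injected hosts block and the log directive are taken from the command itself, while the surrounding plugin list is abbreviated and assumed:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        .:53 {
            log
            errors
            hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf
            # remaining plugins (cache, loop, reload, ...) unchanged and omitted here
        }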
	I1017 18:57:44.843429  260360 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.306087188s)
	I1017 18:57:44.844187  260360 node_ready.go:35] waiting up to 6m0s for node "addons-379549" to be "Ready" ...
	I1017 18:57:44.902656  260360 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1017 18:57:44.902733  260360 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1017 18:57:44.906466  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1017 18:57:44.978560  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 18:57:44.992637  260360 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1017 18:57:44.992709  260360 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1017 18:57:45.263214  260360 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1017 18:57:45.263308  260360 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1017 18:57:45.363193  260360 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-379549" context rescaled to 1 replicas
	I1017 18:57:45.572937  260360 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1017 18:57:45.573018  260360 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1017 18:57:45.872996  260360 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1017 18:57:45.873067  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1017 18:57:46.078170  260360 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1017 18:57:46.078248  260360 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1017 18:57:46.249322  260360 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1017 18:57:46.249396  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1017 18:57:46.453107  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.370878855s)
	I1017 18:57:46.453192  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.378008832s)
	I1017 18:57:46.461537  260360 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1017 18:57:46.461599  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1017 18:57:46.618733  260360 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1017 18:57:46.618808  260360 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1017 18:57:46.770195  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1017 18:57:46.857249  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:57:47.393911  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.265510556s)
	I1017 18:57:47.455146  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.308600977s)
	W1017 18:57:48.860339  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:57:49.020395  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.83029357s)
	I1017 18:57:49.020502  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.824643462s)
	I1017 18:57:49.020834  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.682039164s)
	W1017 18:57:49.020893  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:49.020924  260360 retry.go:31] will retry after 286.470389ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
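The repeated ig-crd.yaml failures in this run all report the same validation error: the manifest kubectl receives has no top-level apiVersion or kind, so each retry fails identically regardless of the backoff. Purely as an illustration of what that validation check expects (the actual Inspektor Gadget CRD contents are not shown in this log, and the group/names below are placeholders), a CRD manifest needs a header along these lines:

    # Hypothetical minimal CRD header; not the real gadget CRD.
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: examples.example.io          # must be <plural>.<group>
    spec:
      group: example.io
      scope: Namespaced
      names:
        plural: examples
        singular: example
        kind: Example
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object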
	I1017 18:57:49.021012  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.570385128s)
	I1017 18:57:49.021217  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.512128972s)
	I1017 18:57:49.021254  260360 addons.go:479] Verifying addon registry=true in "addons-379549"
	I1017 18:57:49.021460  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.844435636s)
	I1017 18:57:49.021732  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.115203498s)
	I1017 18:57:49.021825  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.854787198s)
	I1017 18:57:49.021852  260360 addons.go:479] Verifying addon ingress=true in "addons-379549"
	I1017 18:57:49.021662  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.30071432s)
	I1017 18:57:49.022440  260360 addons.go:479] Verifying addon metrics-server=true in "addons-379549"
	I1017 18:57:49.022020  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.043376928s)
	W1017 18:57:49.022478  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1017 18:57:49.022491  260360 retry.go:31] will retry after 267.602499ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
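This failure is an ordering issue rather than a bad manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch that creates the snapshot.storage.k8s.io CRDs, so the API server has no mapping for the kind yet ("ensure CRDs are installed first"). The retry with `kubectl apply --force` a few lines below completes without a further warning in this log. For context, the object being created is shaped roughly like this (a sketch; only the name comes from the error message, the driver and deletionPolicy values are assumptions):

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: csi-hostpath-snapclass        # name taken from the error message above
    driver: hostpath.csi.k8s.io           # assumed; typical value for the csi-hostpath driver
    deletionPolicy: Delete                # assumed default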
	I1017 18:57:49.024729  260360 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-379549 service yakd-dashboard -n yakd-dashboard
	
	I1017 18:57:49.024792  260360 out.go:179] * Verifying ingress addon...
	I1017 18:57:49.024837  260360 out.go:179] * Verifying registry addon...
	I1017 18:57:49.029126  260360 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1017 18:57:49.029179  260360 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1017 18:57:49.039799  260360 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1017 18:57:49.039818  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:49.040326  260360 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1017 18:57:49.040345  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1017 18:57:49.049406  260360 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
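The default-storageclass warning above is an optimistic-concurrency conflict while clearing the default flag on the local-path class, not a manifest problem. Default selection is driven by a single well-known annotation on the StorageClass object; a minimal sketch of what the failing callback was writing (class name from the message, provisioner and binding mode assumed from the usual local-path-provisioner setup):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-path
      annotations:
        # "true" marks the class as cluster default; the callback was setting this to "false"
        storageclass.kubernetes.io/is-default-class: "false"
    provisioner: rancher.io/local-path            # assumed
    volumeBindingMode: WaitForFirstConsumer       # assumed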
	I1017 18:57:49.291053  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 18:57:49.308089  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:49.322206  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.551902433s)
	I1017 18:57:49.322288  260360 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-379549"
	I1017 18:57:49.327449  260360 out.go:179] * Verifying csi-hostpath-driver addon...
	I1017 18:57:49.331281  260360 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1017 18:57:49.344227  260360 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1017 18:57:49.344253  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:49.534412  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:49.534561  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:49.835664  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:50.033819  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:50.034692  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:50.335330  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:50.533679  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:50.533915  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:50.762960  260360 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1017 18:57:50.763058  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:50.780095  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:50.835452  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:50.895660  260360 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1017 18:57:50.908459  260360 addons.go:238] Setting addon gcp-auth=true in "addons-379549"
	I1017 18:57:50.908509  260360 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:57:50.908986  260360 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:57:50.925597  260360 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1017 18:57:50.925654  260360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:57:50.949827  260360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:57:51.033059  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:51.033149  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:51.334076  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:57:51.347836  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:57:51.533494  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:51.533664  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:51.835365  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:52.033919  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:52.035029  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:52.062144  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.77095881s)
	I1017 18:57:52.062235  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.754064389s)
	W1017 18:57:52.062288  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:52.062323  260360 retry.go:31] will retry after 537.819655ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:52.062328  260360 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.136706738s)
	I1017 18:57:52.065554  260360 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 18:57:52.068377  260360 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1017 18:57:52.071189  260360 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1017 18:57:52.071225  260360 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1017 18:57:52.085903  260360 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1017 18:57:52.085968  260360 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1017 18:57:52.099764  260360 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 18:57:52.099786  260360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1017 18:57:52.113693  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 18:57:52.334960  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:52.537724  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:52.537880  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:52.601014  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:52.610638  260360 addons.go:479] Verifying addon gcp-auth=true in "addons-379549"
	I1017 18:57:52.613688  260360 out.go:179] * Verifying gcp-auth addon...
	I1017 18:57:52.616454  260360 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1017 18:57:52.638491  260360 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1017 18:57:52.638519  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:52.835211  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:53.033679  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:53.033985  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:53.119992  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:53.334740  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:57:53.459391  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:53.459423  260360 retry.go:31] will retry after 709.24434ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:53.532370  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:53.532740  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:53.619385  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:53.834485  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:57:53.848165  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:57:54.032353  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:54.033102  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:54.119839  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:54.169211  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:54.334285  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:54.535599  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:54.536172  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:54.620960  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:54.834963  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:57:54.968996  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:54.969033  260360 retry.go:31] will retry after 1.014713465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:55.034099  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:55.034462  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:55.119731  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:55.335072  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:55.532475  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:55.532594  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:55.619821  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:55.834802  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:55.984931  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:56.034117  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:56.035015  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:56.120044  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:56.334245  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:57:56.348953  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:57:56.535857  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:56.536417  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:56.620509  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:56.800655  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:56.800689  260360 retry.go:31] will retry after 1.669080544s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:56.834473  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:57.032395  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:57.032728  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:57.119411  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:57.334563  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:57.532865  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:57.533106  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:57.620352  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:57.835045  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:58.032291  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:58.032476  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:58.120346  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:58.334485  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:57:58.350845  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:57:58.470019  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:57:58.534220  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:58.534435  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:58.620260  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:58.835151  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:59.033307  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:59.033598  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:59.120222  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:57:59.257943  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:59.257974  260360 retry.go:31] will retry after 1.734205979s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:57:59.334882  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:57:59.532850  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:57:59.533569  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:57:59.619549  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:57:59.834462  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:00.057961  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:00.058088  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:00.123238  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:00.335769  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:00.352149  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:00.533308  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:00.533760  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:00.619335  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:00.834581  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:00.992908  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:58:01.033669  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:01.034497  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:01.119622  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:01.337123  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:01.532743  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:01.533005  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:01.619583  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:58:01.820844  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:01.820925  260360 retry.go:31] will retry after 1.458897537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:01.834881  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:02.033335  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:02.033648  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:02.119264  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:02.335039  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:02.533750  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:02.533954  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:02.619514  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:02.834609  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:02.847344  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:03.033608  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:03.033895  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:03.119839  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:03.280014  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:58:03.334463  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:03.533579  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:03.533797  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:03.619825  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:03.834438  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:04.035208  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:04.035762  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1017 18:58:04.091732  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:04.091821  260360 retry.go:31] will retry after 6.060894765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:04.119972  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:04.334878  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:04.533578  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:04.533893  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:04.634324  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:04.834490  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:04.847474  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:05.032910  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:05.033096  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:05.120136  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:05.335062  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:05.533236  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:05.533446  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:05.621937  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:05.835114  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:06.033242  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:06.033639  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:06.119452  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:06.334579  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:06.532440  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:06.532818  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:06.619500  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:06.834567  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:07.032904  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:07.033084  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:07.119885  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:07.335033  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:07.347830  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:07.532895  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:07.532970  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:07.619928  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:07.834896  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:08.034425  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:08.034533  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:08.119974  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:08.334778  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:08.533400  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:08.533669  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:08.619700  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:08.834553  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:09.032807  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:09.033119  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:09.119672  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:09.334660  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:09.532132  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:09.532198  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:09.619532  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:09.834318  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:09.847826  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:10.037838  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:10.038507  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:10.119435  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:10.153516  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:58:10.335120  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:10.534505  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:10.534682  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:10.619489  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:10.834700  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:10.967512  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:10.967544  260360 retry.go:31] will retry after 6.543256703s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:11.032449  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:11.032849  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:11.119749  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:11.334849  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:11.532396  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:11.532654  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:11.619320  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:11.834213  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:12.032899  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:12.032979  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:12.119788  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:12.335348  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:12.346908  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:12.533150  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:12.533580  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:12.619315  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:12.834280  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:13.032251  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:13.032489  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:13.120335  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:13.334151  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:13.532667  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:13.532969  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:13.619719  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:13.834565  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:14.033201  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:14.033501  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:14.120123  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:14.334783  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:14.347445  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:14.533095  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:14.533139  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:14.619797  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:14.834757  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:15.033954  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:15.034017  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:15.119611  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:15.334523  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:15.532875  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:15.533202  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:15.620082  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:15.835395  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:16.033055  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:16.033708  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:16.119276  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:16.334163  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:16.533384  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:16.533571  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:16.620004  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:16.834856  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:16.847580  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:17.032905  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:17.032986  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:17.119607  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:17.335517  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:17.511553  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:58:17.534161  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:17.534938  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:17.620143  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:17.834680  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:18.034280  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:18.034852  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:18.119713  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:58:18.319519  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:18.319550  260360 retry.go:31] will retry after 5.014946963s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:18.334614  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:18.532468  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:18.532611  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:18.619358  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:18.834618  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:19.032763  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:19.032926  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:19.119618  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:19.334581  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:19.347264  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:19.532247  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:19.532574  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:19.619248  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:19.834240  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:20.032945  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:20.033406  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:20.120183  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:20.334128  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:20.532993  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:20.533505  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:20.619252  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:20.834341  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:21.032710  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:21.032874  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:21.125286  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:21.334428  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:21.532372  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:21.532513  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:21.619323  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:21.834732  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:21.847477  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:22.032858  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:22.033039  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:22.120415  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:22.334523  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:22.532157  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:22.532303  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:22.620159  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:22.834257  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:23.032535  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:23.032931  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:23.119598  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:23.334777  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:58:23.334900  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:23.534064  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:23.534168  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:23.620434  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:23.834689  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 18:58:23.847732  260360 node_ready.go:57] node "addons-379549" has "Ready":"False" status (will retry)
	I1017 18:58:24.033397  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:24.034241  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:24.138262  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 18:58:24.206124  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:24.206151  260360 retry.go:31] will retry after 21.566932522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:24.364465  260360 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1017 18:58:24.364556  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:24.409298  260360 node_ready.go:49] node "addons-379549" is "Ready"
	I1017 18:58:24.409375  260360 node_ready.go:38] duration metric: took 39.56510904s for node "addons-379549" to be "Ready" ...
	I1017 18:58:24.409415  260360 api_server.go:52] waiting for apiserver process to appear ...
	I1017 18:58:24.409516  260360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 18:58:24.430912  260360 api_server.go:72] duration metric: took 41.793620969s to wait for apiserver process to appear ...
	I1017 18:58:24.430984  260360 api_server.go:88] waiting for apiserver healthz status ...
	I1017 18:58:24.431018  260360 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 18:58:24.439764  260360 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 18:58:24.448797  260360 api_server.go:141] control plane version: v1.34.1
	I1017 18:58:24.448836  260360 api_server.go:131] duration metric: took 17.825154ms to wait for apiserver health ...
	I1017 18:58:24.448846  260360 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 18:58:24.486646  260360 system_pods.go:59] 19 kube-system pods found
	I1017 18:58:24.486766  260360 system_pods.go:61] "coredns-66bc5c9577-cdn2p" [1f00660c-1ffb-43d1-9696-f2d467c8d695] Pending
	I1017 18:58:24.486809  260360 system_pods.go:61] "csi-hostpath-attacher-0" [f9f7eaeb-2121-444d-a3a1-a63c14345e11] Pending
	I1017 18:58:24.486836  260360 system_pods.go:61] "csi-hostpath-resizer-0" [55e67c03-83b5-4067-ad75-6989391f3bc7] Pending
	I1017 18:58:24.486858  260360 system_pods.go:61] "csi-hostpathplugin-dnj9h" [21c0c3df-9209-4bc9-97b5-6df190d961ac] Pending
	I1017 18:58:24.486890  260360 system_pods.go:61] "etcd-addons-379549" [7f7f777a-ca00-4fb0-a88d-83320ec99ef4] Running
	I1017 18:58:24.486915  260360 system_pods.go:61] "kindnet-2gclq" [5af0053d-cab8-47ce-992f-5f170221eb75] Running
	I1017 18:58:24.486942  260360 system_pods.go:61] "kube-apiserver-addons-379549" [2a84a283-09ca-4044-88f4-5bab2d437a1c] Running
	I1017 18:58:24.486979  260360 system_pods.go:61] "kube-controller-manager-addons-379549" [a942dd2b-1f45-4f12-a9da-9c44240aeb3b] Running
	I1017 18:58:24.487012  260360 system_pods.go:61] "kube-ingress-dns-minikube" [a5bc83dd-0e62-49bd-bd0f-ced72e1e81d3] Pending
	I1017 18:58:24.487033  260360 system_pods.go:61] "kube-proxy-9fnkd" [a408204b-db68-48f1-bd0b-fdc7a107dd53] Running
	I1017 18:58:24.487069  260360 system_pods.go:61] "kube-scheduler-addons-379549" [0d4dd7af-36a4-4d02-8185-240b7866dc35] Running
	I1017 18:58:24.487105  260360 system_pods.go:61] "metrics-server-85b7d694d7-kx9vs" [3f92a023-86a2-48df-b062-25036c73dd56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:58:24.487127  260360 system_pods.go:61] "nvidia-device-plugin-daemonset-5tz6p" [379ab14e-3f5a-4e60-a28a-563f7f5de7af] Pending
	I1017 18:58:24.487167  260360 system_pods.go:61] "registry-6b586f9694-lggv9" [27b5c261-0db7-4e88-84bf-fe4b05cf5968] Pending
	I1017 18:58:24.487187  260360 system_pods.go:61] "registry-creds-764b6fb674-v5s46" [26e0457e-0841-4658-b957-473746bb21d1] Pending
	I1017 18:58:24.487209  260360 system_pods.go:61] "registry-proxy-q985d" [2a95f94d-0609-4773-8345-e3789378c865] Pending
	I1017 18:58:24.487250  260360 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8j5lv" [c500fc45-7077-4fec-ba79-fbad181c1d02] Pending
	I1017 18:58:24.487273  260360 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ctqmz" [b812c0ac-9f8f-409b-a8e0-f050f510849d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:24.487313  260360 system_pods.go:61] "storage-provisioner" [a4d946ce-92ed-46d9-a359-bbe460092cbb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:58:24.487339  260360 system_pods.go:74] duration metric: took 38.485342ms to wait for pod list to return data ...
	I1017 18:58:24.487369  260360 default_sa.go:34] waiting for default service account to be created ...
	I1017 18:58:24.565520  260360 default_sa.go:45] found service account: "default"
	I1017 18:58:24.565543  260360 default_sa.go:55] duration metric: took 78.143447ms for default service account to be created ...
	I1017 18:58:24.565553  260360 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 18:58:24.584678  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:24.584781  260360 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1017 18:58:24.584789  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:24.655731  260360 system_pods.go:86] 19 kube-system pods found
	I1017 18:58:24.655809  260360 system_pods.go:89] "coredns-66bc5c9577-cdn2p" [1f00660c-1ffb-43d1-9696-f2d467c8d695] Pending
	I1017 18:58:24.655834  260360 system_pods.go:89] "csi-hostpath-attacher-0" [f9f7eaeb-2121-444d-a3a1-a63c14345e11] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 18:58:24.655857  260360 system_pods.go:89] "csi-hostpath-resizer-0" [55e67c03-83b5-4067-ad75-6989391f3bc7] Pending
	I1017 18:58:24.655891  260360 system_pods.go:89] "csi-hostpathplugin-dnj9h" [21c0c3df-9209-4bc9-97b5-6df190d961ac] Pending
	I1017 18:58:24.655915  260360 system_pods.go:89] "etcd-addons-379549" [7f7f777a-ca00-4fb0-a88d-83320ec99ef4] Running
	I1017 18:58:24.655936  260360 system_pods.go:89] "kindnet-2gclq" [5af0053d-cab8-47ce-992f-5f170221eb75] Running
	I1017 18:58:24.655971  260360 system_pods.go:89] "kube-apiserver-addons-379549" [2a84a283-09ca-4044-88f4-5bab2d437a1c] Running
	I1017 18:58:24.655995  260360 system_pods.go:89] "kube-controller-manager-addons-379549" [a942dd2b-1f45-4f12-a9da-9c44240aeb3b] Running
	I1017 18:58:24.656014  260360 system_pods.go:89] "kube-ingress-dns-minikube" [a5bc83dd-0e62-49bd-bd0f-ced72e1e81d3] Pending
	I1017 18:58:24.656047  260360 system_pods.go:89] "kube-proxy-9fnkd" [a408204b-db68-48f1-bd0b-fdc7a107dd53] Running
	I1017 18:58:24.656070  260360 system_pods.go:89] "kube-scheduler-addons-379549" [0d4dd7af-36a4-4d02-8185-240b7866dc35] Running
	I1017 18:58:24.656091  260360 system_pods.go:89] "metrics-server-85b7d694d7-kx9vs" [3f92a023-86a2-48df-b062-25036c73dd56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:58:24.656110  260360 system_pods.go:89] "nvidia-device-plugin-daemonset-5tz6p" [379ab14e-3f5a-4e60-a28a-563f7f5de7af] Pending
	I1017 18:58:24.656147  260360 system_pods.go:89] "registry-6b586f9694-lggv9" [27b5c261-0db7-4e88-84bf-fe4b05cf5968] Pending
	I1017 18:58:24.656164  260360 system_pods.go:89] "registry-creds-764b6fb674-v5s46" [26e0457e-0841-4658-b957-473746bb21d1] Pending
	I1017 18:58:24.656184  260360 system_pods.go:89] "registry-proxy-q985d" [2a95f94d-0609-4773-8345-e3789378c865] Pending
	I1017 18:58:24.656214  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8j5lv" [c500fc45-7077-4fec-ba79-fbad181c1d02] Pending
	I1017 18:58:24.656242  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ctqmz" [b812c0ac-9f8f-409b-a8e0-f050f510849d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:24.656266  260360 system_pods.go:89] "storage-provisioner" [a4d946ce-92ed-46d9-a359-bbe460092cbb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:58:24.656311  260360 retry.go:31] will retry after 256.846359ms: missing components: kube-dns
	I1017 18:58:24.667537  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:24.840424  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:24.923607  260360 system_pods.go:86] 19 kube-system pods found
	I1017 18:58:24.923695  260360 system_pods.go:89] "coredns-66bc5c9577-cdn2p" [1f00660c-1ffb-43d1-9696-f2d467c8d695] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 18:58:24.923721  260360 system_pods.go:89] "csi-hostpath-attacher-0" [f9f7eaeb-2121-444d-a3a1-a63c14345e11] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 18:58:24.923760  260360 system_pods.go:89] "csi-hostpath-resizer-0" [55e67c03-83b5-4067-ad75-6989391f3bc7] Pending
	I1017 18:58:24.923783  260360 system_pods.go:89] "csi-hostpathplugin-dnj9h" [21c0c3df-9209-4bc9-97b5-6df190d961ac] Pending
	I1017 18:58:24.923801  260360 system_pods.go:89] "etcd-addons-379549" [7f7f777a-ca00-4fb0-a88d-83320ec99ef4] Running
	I1017 18:58:24.923823  260360 system_pods.go:89] "kindnet-2gclq" [5af0053d-cab8-47ce-992f-5f170221eb75] Running
	I1017 18:58:24.923856  260360 system_pods.go:89] "kube-apiserver-addons-379549" [2a84a283-09ca-4044-88f4-5bab2d437a1c] Running
	I1017 18:58:24.923880  260360 system_pods.go:89] "kube-controller-manager-addons-379549" [a942dd2b-1f45-4f12-a9da-9c44240aeb3b] Running
	I1017 18:58:24.923901  260360 system_pods.go:89] "kube-ingress-dns-minikube" [a5bc83dd-0e62-49bd-bd0f-ced72e1e81d3] Pending
	I1017 18:58:24.923942  260360 system_pods.go:89] "kube-proxy-9fnkd" [a408204b-db68-48f1-bd0b-fdc7a107dd53] Running
	I1017 18:58:24.923966  260360 system_pods.go:89] "kube-scheduler-addons-379549" [0d4dd7af-36a4-4d02-8185-240b7866dc35] Running
	I1017 18:58:24.923996  260360 system_pods.go:89] "metrics-server-85b7d694d7-kx9vs" [3f92a023-86a2-48df-b062-25036c73dd56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:58:24.924031  260360 system_pods.go:89] "nvidia-device-plugin-daemonset-5tz6p" [379ab14e-3f5a-4e60-a28a-563f7f5de7af] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 18:58:24.924056  260360 system_pods.go:89] "registry-6b586f9694-lggv9" [27b5c261-0db7-4e88-84bf-fe4b05cf5968] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:58:24.924079  260360 system_pods.go:89] "registry-creds-764b6fb674-v5s46" [26e0457e-0841-4658-b957-473746bb21d1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:58:24.924117  260360 system_pods.go:89] "registry-proxy-q985d" [2a95f94d-0609-4773-8345-e3789378c865] Pending
	I1017 18:58:24.924145  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8j5lv" [c500fc45-7077-4fec-ba79-fbad181c1d02] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:24.924168  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ctqmz" [b812c0ac-9f8f-409b-a8e0-f050f510849d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:24.924206  260360 system_pods.go:89] "storage-provisioner" [a4d946ce-92ed-46d9-a359-bbe460092cbb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:58:24.924243  260360 retry.go:31] will retry after 287.083262ms: missing components: kube-dns
	I1017 18:58:25.041469  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:25.048448  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:25.121138  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:25.218607  260360 system_pods.go:86] 19 kube-system pods found
	I1017 18:58:25.218641  260360 system_pods.go:89] "coredns-66bc5c9577-cdn2p" [1f00660c-1ffb-43d1-9696-f2d467c8d695] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 18:58:25.218650  260360 system_pods.go:89] "csi-hostpath-attacher-0" [f9f7eaeb-2121-444d-a3a1-a63c14345e11] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 18:58:25.218658  260360 system_pods.go:89] "csi-hostpath-resizer-0" [55e67c03-83b5-4067-ad75-6989391f3bc7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 18:58:25.218665  260360 system_pods.go:89] "csi-hostpathplugin-dnj9h" [21c0c3df-9209-4bc9-97b5-6df190d961ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 18:58:25.218677  260360 system_pods.go:89] "etcd-addons-379549" [7f7f777a-ca00-4fb0-a88d-83320ec99ef4] Running
	I1017 18:58:25.218682  260360 system_pods.go:89] "kindnet-2gclq" [5af0053d-cab8-47ce-992f-5f170221eb75] Running
	I1017 18:58:25.218696  260360 system_pods.go:89] "kube-apiserver-addons-379549" [2a84a283-09ca-4044-88f4-5bab2d437a1c] Running
	I1017 18:58:25.218701  260360 system_pods.go:89] "kube-controller-manager-addons-379549" [a942dd2b-1f45-4f12-a9da-9c44240aeb3b] Running
	I1017 18:58:25.218708  260360 system_pods.go:89] "kube-ingress-dns-minikube" [a5bc83dd-0e62-49bd-bd0f-ced72e1e81d3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 18:58:25.218715  260360 system_pods.go:89] "kube-proxy-9fnkd" [a408204b-db68-48f1-bd0b-fdc7a107dd53] Running
	I1017 18:58:25.218720  260360 system_pods.go:89] "kube-scheduler-addons-379549" [0d4dd7af-36a4-4d02-8185-240b7866dc35] Running
	I1017 18:58:25.218729  260360 system_pods.go:89] "metrics-server-85b7d694d7-kx9vs" [3f92a023-86a2-48df-b062-25036c73dd56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:58:25.218739  260360 system_pods.go:89] "nvidia-device-plugin-daemonset-5tz6p" [379ab14e-3f5a-4e60-a28a-563f7f5de7af] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 18:58:25.218747  260360 system_pods.go:89] "registry-6b586f9694-lggv9" [27b5c261-0db7-4e88-84bf-fe4b05cf5968] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:58:25.218753  260360 system_pods.go:89] "registry-creds-764b6fb674-v5s46" [26e0457e-0841-4658-b957-473746bb21d1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:58:25.218759  260360 system_pods.go:89] "registry-proxy-q985d" [2a95f94d-0609-4773-8345-e3789378c865] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 18:58:25.218765  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8j5lv" [c500fc45-7077-4fec-ba79-fbad181c1d02] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:25.218772  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ctqmz" [b812c0ac-9f8f-409b-a8e0-f050f510849d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:25.218778  260360 system_pods.go:89] "storage-provisioner" [a4d946ce-92ed-46d9-a359-bbe460092cbb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:58:25.218792  260360 retry.go:31] will retry after 366.873436ms: missing components: kube-dns
	I1017 18:58:25.335677  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:25.535050  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:25.535376  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:25.590424  260360 system_pods.go:86] 19 kube-system pods found
	I1017 18:58:25.590466  260360 system_pods.go:89] "coredns-66bc5c9577-cdn2p" [1f00660c-1ffb-43d1-9696-f2d467c8d695] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 18:58:25.590476  260360 system_pods.go:89] "csi-hostpath-attacher-0" [f9f7eaeb-2121-444d-a3a1-a63c14345e11] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 18:58:25.590483  260360 system_pods.go:89] "csi-hostpath-resizer-0" [55e67c03-83b5-4067-ad75-6989391f3bc7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 18:58:25.590491  260360 system_pods.go:89] "csi-hostpathplugin-dnj9h" [21c0c3df-9209-4bc9-97b5-6df190d961ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 18:58:25.590496  260360 system_pods.go:89] "etcd-addons-379549" [7f7f777a-ca00-4fb0-a88d-83320ec99ef4] Running
	I1017 18:58:25.590502  260360 system_pods.go:89] "kindnet-2gclq" [5af0053d-cab8-47ce-992f-5f170221eb75] Running
	I1017 18:58:25.590511  260360 system_pods.go:89] "kube-apiserver-addons-379549" [2a84a283-09ca-4044-88f4-5bab2d437a1c] Running
	I1017 18:58:25.590516  260360 system_pods.go:89] "kube-controller-manager-addons-379549" [a942dd2b-1f45-4f12-a9da-9c44240aeb3b] Running
	I1017 18:58:25.590525  260360 system_pods.go:89] "kube-ingress-dns-minikube" [a5bc83dd-0e62-49bd-bd0f-ced72e1e81d3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 18:58:25.590529  260360 system_pods.go:89] "kube-proxy-9fnkd" [a408204b-db68-48f1-bd0b-fdc7a107dd53] Running
	I1017 18:58:25.590534  260360 system_pods.go:89] "kube-scheduler-addons-379549" [0d4dd7af-36a4-4d02-8185-240b7866dc35] Running
	I1017 18:58:25.590547  260360 system_pods.go:89] "metrics-server-85b7d694d7-kx9vs" [3f92a023-86a2-48df-b062-25036c73dd56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:58:25.590554  260360 system_pods.go:89] "nvidia-device-plugin-daemonset-5tz6p" [379ab14e-3f5a-4e60-a28a-563f7f5de7af] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 18:58:25.590566  260360 system_pods.go:89] "registry-6b586f9694-lggv9" [27b5c261-0db7-4e88-84bf-fe4b05cf5968] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:58:25.590576  260360 system_pods.go:89] "registry-creds-764b6fb674-v5s46" [26e0457e-0841-4658-b957-473746bb21d1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:58:25.590591  260360 system_pods.go:89] "registry-proxy-q985d" [2a95f94d-0609-4773-8345-e3789378c865] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 18:58:25.590598  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8j5lv" [c500fc45-7077-4fec-ba79-fbad181c1d02] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:25.590608  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ctqmz" [b812c0ac-9f8f-409b-a8e0-f050f510849d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:25.590616  260360 system_pods.go:89] "storage-provisioner" [a4d946ce-92ed-46d9-a359-bbe460092cbb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 18:58:25.590631  260360 retry.go:31] will retry after 450.765843ms: missing components: kube-dns
	I1017 18:58:25.619533  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:25.864117  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:26.034379  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:26.035377  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:26.046914  260360 system_pods.go:86] 19 kube-system pods found
	I1017 18:58:26.046950  260360 system_pods.go:89] "coredns-66bc5c9577-cdn2p" [1f00660c-1ffb-43d1-9696-f2d467c8d695] Running
	I1017 18:58:26.046961  260360 system_pods.go:89] "csi-hostpath-attacher-0" [f9f7eaeb-2121-444d-a3a1-a63c14345e11] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 18:58:26.046969  260360 system_pods.go:89] "csi-hostpath-resizer-0" [55e67c03-83b5-4067-ad75-6989391f3bc7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 18:58:26.046978  260360 system_pods.go:89] "csi-hostpathplugin-dnj9h" [21c0c3df-9209-4bc9-97b5-6df190d961ac] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 18:58:26.046985  260360 system_pods.go:89] "etcd-addons-379549" [7f7f777a-ca00-4fb0-a88d-83320ec99ef4] Running
	I1017 18:58:26.046990  260360 system_pods.go:89] "kindnet-2gclq" [5af0053d-cab8-47ce-992f-5f170221eb75] Running
	I1017 18:58:26.046995  260360 system_pods.go:89] "kube-apiserver-addons-379549" [2a84a283-09ca-4044-88f4-5bab2d437a1c] Running
	I1017 18:58:26.047000  260360 system_pods.go:89] "kube-controller-manager-addons-379549" [a942dd2b-1f45-4f12-a9da-9c44240aeb3b] Running
	I1017 18:58:26.047006  260360 system_pods.go:89] "kube-ingress-dns-minikube" [a5bc83dd-0e62-49bd-bd0f-ced72e1e81d3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 18:58:26.047011  260360 system_pods.go:89] "kube-proxy-9fnkd" [a408204b-db68-48f1-bd0b-fdc7a107dd53] Running
	I1017 18:58:26.047021  260360 system_pods.go:89] "kube-scheduler-addons-379549" [0d4dd7af-36a4-4d02-8185-240b7866dc35] Running
	I1017 18:58:26.047028  260360 system_pods.go:89] "metrics-server-85b7d694d7-kx9vs" [3f92a023-86a2-48df-b062-25036c73dd56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 18:58:26.047038  260360 system_pods.go:89] "nvidia-device-plugin-daemonset-5tz6p" [379ab14e-3f5a-4e60-a28a-563f7f5de7af] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 18:58:26.047045  260360 system_pods.go:89] "registry-6b586f9694-lggv9" [27b5c261-0db7-4e88-84bf-fe4b05cf5968] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 18:58:26.047052  260360 system_pods.go:89] "registry-creds-764b6fb674-v5s46" [26e0457e-0841-4658-b957-473746bb21d1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 18:58:26.047065  260360 system_pods.go:89] "registry-proxy-q985d" [2a95f94d-0609-4773-8345-e3789378c865] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 18:58:26.047072  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8j5lv" [c500fc45-7077-4fec-ba79-fbad181c1d02] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:26.047083  260360 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ctqmz" [b812c0ac-9f8f-409b-a8e0-f050f510849d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 18:58:26.047088  260360 system_pods.go:89] "storage-provisioner" [a4d946ce-92ed-46d9-a359-bbe460092cbb] Running
	I1017 18:58:26.047099  260360 system_pods.go:126] duration metric: took 1.481540846s to wait for k8s-apps to be running ...
	I1017 18:58:26.047115  260360 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 18:58:26.047170  260360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 18:58:26.063784  260360 system_svc.go:56] duration metric: took 16.644894ms WaitForService to wait for kubelet
	I1017 18:58:26.063867  260360 kubeadm.go:586] duration metric: took 43.426580127s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 18:58:26.063902  260360 node_conditions.go:102] verifying NodePressure condition ...
	I1017 18:58:26.067012  260360 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 18:58:26.067046  260360 node_conditions.go:123] node cpu capacity is 2
	I1017 18:58:26.067060  260360 node_conditions.go:105] duration metric: took 3.125218ms to run NodePressure ...
	I1017 18:58:26.067075  260360 start.go:241] waiting for startup goroutines ...
	I1017 18:58:26.135031  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:26.334252  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:26.533539  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:26.533825  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:26.620115  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:26.848210  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:27.033484  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:27.033712  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:27.119336  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:27.334885  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:27.535178  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:27.535681  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:27.619810  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:27.840660  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:28.033767  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:28.034388  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:28.134334  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:28.335707  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:28.534265  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:28.534641  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:28.619925  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:28.849796  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:29.037617  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:29.038573  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:29.137926  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:29.336505  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:29.534567  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:29.535021  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:29.620048  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:29.835689  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:30.039405  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:30.039852  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:30.120077  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:30.337317  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:30.536182  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:30.536545  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:30.619327  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:30.834993  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:31.033813  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:31.034060  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:31.133809  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:31.339835  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:31.538379  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:31.538716  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:31.619766  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:31.835507  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:32.033967  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:32.034090  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:32.120294  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:32.334983  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:32.532961  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:32.533130  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:32.620838  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:32.835384  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:33.033897  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:33.034489  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:33.119374  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:33.334816  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:33.533914  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:33.534087  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:33.620178  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:33.834217  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:34.033893  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:34.034279  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:34.119883  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:34.335525  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:34.532605  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:34.533181  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:34.620048  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:34.834261  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:35.033300  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:35.033523  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:35.119416  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:35.335967  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:35.534336  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:35.534990  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:35.620399  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:35.834967  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:36.034609  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:36.034931  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:36.120594  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:36.335121  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:36.534093  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:36.534309  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:36.620314  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:36.835600  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:37.035582  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:37.035863  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:37.119972  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:37.335368  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:37.534418  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:37.534718  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:37.619926  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:37.835961  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:38.034339  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:38.035020  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:38.120018  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:38.335912  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:38.532649  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:38.533898  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:38.620305  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:38.835669  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:39.034249  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:39.034522  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:39.134227  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:39.335565  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:39.533451  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:39.533741  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:39.620427  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:39.835237  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:40.056051  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:40.056588  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:40.149214  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:40.336085  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:40.534204  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:40.534663  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:40.619928  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:40.835824  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:41.034289  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:41.034769  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:41.119679  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:41.335892  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:41.534924  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:41.535334  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:41.620123  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:41.834172  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:42.035699  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:42.037403  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:42.119897  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:42.335507  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:42.534568  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:42.538824  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:42.639081  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:42.835244  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:43.033471  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:43.033622  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:43.119313  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:43.334694  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:43.534439  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:43.534556  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:43.619416  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:43.834248  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:44.035932  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:44.036479  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:44.119513  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:44.335163  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:44.534009  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:44.534438  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:44.619321  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:44.834884  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:45.047934  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:45.047953  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:45.122529  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:45.337881  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:45.535501  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:45.536015  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:45.619997  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:45.773288  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:58:45.834887  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:46.033628  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:46.033804  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:46.119888  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:46.335799  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:46.533988  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:46.534255  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:46.619552  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:46.835244  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:46.897801  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.124464489s)
	W1017 18:58:46.897839  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 18:58:46.897865  260360 retry.go:31] will retry after 17.010967715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
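	[editor's note] The apply above fails only on client-side validation of /etc/kubernetes/addons/ig-crd.yaml ("apiVersion not set, kind not set"); the stdout shows every object from ig-deployment.yaml applied cleanly, so the problem is isolated to the CRD file. A minimal diagnostic sketch, assuming the file is inspected from inside the minikube node (the expected header fields are an assumption based on a standard CustomResourceDefinition manifest; the actual file contents are not shown in this log):

	```bash
	# Inspect the manifest that failed client-side validation.
	# Path taken from the log above; run inside the node, e.g. via `minikube ssh`.
	sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml

	# A well-formed CRD manifest is expected to begin with (assumed, not shown in this log):
	#   apiVersion: apiextensions.k8s.io/v1
	#   kind: CustomResourceDefinition
	```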
	I1017 18:58:47.035030  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:47.035298  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:47.119880  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:47.337610  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:47.534533  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:47.535042  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:47.633891  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:47.835286  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:48.034126  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:48.034295  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:48.120425  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:48.334615  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:48.533864  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:48.534274  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:48.621469  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:48.835356  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:49.036735  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:49.036973  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:49.120299  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:49.335200  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:49.536721  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:49.537361  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:49.620233  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:49.835958  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:50.034494  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:50.034714  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:50.119438  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:50.334783  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:50.533674  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:50.534129  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:50.620309  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:50.835036  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:51.034365  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:51.034790  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:51.120236  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:51.334943  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:51.538332  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:51.546203  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:51.621959  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:51.835528  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:52.033706  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:52.034500  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:52.119220  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:52.334327  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:52.534132  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:52.534341  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:52.634091  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:52.835661  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:53.043729  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:53.044129  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:53.141082  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:53.335711  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:53.534498  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:53.534927  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:53.620106  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:53.834682  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:54.034604  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:54.034863  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:54.134888  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:54.335098  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:54.533978  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:54.534934  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:54.619632  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:54.834647  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:55.033545  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:55.034210  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:55.119936  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:55.335283  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:55.533563  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:55.534755  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:55.620201  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:55.835361  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:56.037987  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:56.038712  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:56.119758  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:56.335967  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:56.533773  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:56.534591  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:56.619763  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:56.836299  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:57.034884  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:57.035450  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:57.119601  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:57.337730  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:57.536699  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:57.537131  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:57.620775  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:57.835719  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:58.033314  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:58.033933  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:58.120495  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:58.335432  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:58.533106  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:58.534061  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:58.619636  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:58.835244  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:59.035221  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:59.035629  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:59.120185  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:59.335790  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:58:59.533292  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:58:59.534460  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:58:59.634054  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:58:59.837281  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:00.048795  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:00.048908  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:00.120540  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:00.336246  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:00.534552  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:00.535152  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:00.621428  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:00.837307  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:01.034610  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:01.034752  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:01.119530  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:01.335616  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:01.533999  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:01.534124  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:01.624980  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:01.835837  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:02.035175  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:02.035418  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:02.120189  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:02.335135  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:02.534385  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:02.534817  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:02.619774  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:02.835034  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:03.033265  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:03.033409  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:03.119310  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:03.334901  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:03.533796  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:03.534174  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:03.620775  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:03.836058  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:03.909332  260360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 18:59:04.034972  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:04.035388  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:04.120645  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:04.335912  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:04.534354  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:04.534967  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:04.619523  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:04.835788  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:05.037159  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:05.037536  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:05.052088  260360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.14271416s)
	W1017 18:59:05.052128  260360 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1017 18:59:05.052206  260360 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
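	[editor's note] After the second retry hits the same validation error, the addon enable gives up here, which matches the TestAddons/parallel/InspektorGadget failure listed at the top of this report. The kubectl error text offers --validate=false as an escape hatch; a sketch of that manual re-run is below, using only the paths and flags already shown in the log. Note this is hedged: --validate=false only skips client-side schema validation, and if apiVersion/kind are genuinely absent from ig-crd.yaml the apply would likely still fail when the object is decoded, so restoring those header fields is the actual fix.

	```bash
	# Manual re-run of the failing apply with client-side validation disabled,
	# as suggested by the error message (sketch; run inside the node).
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml \
	  -f /etc/kubernetes/addons/ig-deployment.yaml
	```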
	I1017 18:59:05.120152  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:05.334580  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:05.534835  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:05.534979  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:05.619560  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:05.835283  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:06.034655  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:06.035115  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:06.120334  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:06.335159  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:06.534383  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:06.534904  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:06.620114  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:06.835777  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:07.034234  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:07.034545  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:07.119599  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:07.335234  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:07.532969  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:07.533659  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:07.619385  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:07.835348  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:08.036905  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:08.037168  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:08.123838  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:08.336431  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:08.535624  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:08.536080  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:08.620782  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:08.836043  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:09.034594  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:09.035107  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:09.128814  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:09.341301  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:09.533519  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:09.533852  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:09.619598  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:09.834933  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:10.034996  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:10.035409  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:10.120558  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:10.334995  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:10.533086  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:10.533448  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:10.619501  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:10.835031  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:11.032777  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:11.032832  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:11.124720  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:11.335357  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:11.533677  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:11.533836  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:11.619637  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:11.838891  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:12.037250  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 18:59:12.037440  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:12.119268  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:12.335860  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:12.533926  260360 kapi.go:107] duration metric: took 1m23.504796959s to wait for kubernetes.io/minikube-addons=registry ...
	I1017 18:59:12.534253  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:12.619930  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:12.835949  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:13.034643  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:13.119767  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:13.338319  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:13.532872  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:13.619824  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:13.836277  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:14.032735  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:14.121718  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:14.335910  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:14.533340  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:14.620134  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:14.835468  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:15.034396  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:15.120386  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:15.335373  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:15.532830  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:15.620527  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:15.835836  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:16.033272  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:16.120195  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:16.334395  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:16.535760  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:16.637762  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:16.835792  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:17.033180  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:17.120208  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:17.335361  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:17.532493  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:17.619971  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:17.835713  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:18.032991  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:18.119881  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:18.335726  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:18.533023  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:18.620140  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:18.838004  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:19.033403  260360 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 18:59:19.133142  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:19.334342  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:19.532661  260360 kapi.go:107] duration metric: took 1m30.503478233s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1017 18:59:19.619702  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:19.836043  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:20.119995  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:20.425623  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:20.619788  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:20.836360  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:21.120884  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:21.335752  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:21.626657  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:21.836266  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:22.119509  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:22.335001  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:22.619943  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:22.836151  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:23.121698  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:23.338646  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:23.619286  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:23.835349  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:24.121448  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:24.336454  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:24.620325  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:24.835783  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:25.120369  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:25.335352  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:25.619886  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:25.841843  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:26.120383  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:26.335022  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:26.622480  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:26.835340  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:27.119858  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:27.335601  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:27.619533  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:27.837109  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:28.121035  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:28.339181  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:28.620905  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:28.835084  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:29.119771  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:29.335255  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:29.620392  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:29.834830  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 18:59:30.119872  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:30.335405  260360 kapi.go:107] duration metric: took 1m41.004120887s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1017 18:59:30.620277  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:31.120820  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:31.621008  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:32.119854  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:32.620613  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:33.120134  260360 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 18:59:33.620117  260360 kapi.go:107] duration metric: took 1m41.00366236s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1017 18:59:33.637668  260360 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-379549 cluster.
	I1017 18:59:33.652134  260360 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1017 18:59:33.662273  260360 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1017 18:59:33.669721  260360 out.go:179] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, registry-creds, amd-gpu-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1017 18:59:33.671195  260360 addons.go:514] duration metric: took 1m51.033396982s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns registry-creds amd-gpu-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1017 18:59:33.671251  260360 start.go:246] waiting for cluster config update ...
	I1017 18:59:33.671271  260360 start.go:255] writing updated cluster config ...
	I1017 18:59:33.671570  260360 ssh_runner.go:195] Run: rm -f paused
	I1017 18:59:33.675968  260360 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 18:59:33.679424  260360 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cdn2p" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:33.684198  260360 pod_ready.go:94] pod "coredns-66bc5c9577-cdn2p" is "Ready"
	I1017 18:59:33.684227  260360 pod_ready.go:86] duration metric: took 4.779107ms for pod "coredns-66bc5c9577-cdn2p" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:33.686802  260360 pod_ready.go:83] waiting for pod "etcd-addons-379549" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:33.691629  260360 pod_ready.go:94] pod "etcd-addons-379549" is "Ready"
	I1017 18:59:33.691657  260360 pod_ready.go:86] duration metric: took 4.827213ms for pod "etcd-addons-379549" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:33.694355  260360 pod_ready.go:83] waiting for pod "kube-apiserver-addons-379549" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:33.699110  260360 pod_ready.go:94] pod "kube-apiserver-addons-379549" is "Ready"
	I1017 18:59:33.699143  260360 pod_ready.go:86] duration metric: took 4.761639ms for pod "kube-apiserver-addons-379549" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:33.701516  260360 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-379549" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:34.080314  260360 pod_ready.go:94] pod "kube-controller-manager-addons-379549" is "Ready"
	I1017 18:59:34.080343  260360 pod_ready.go:86] duration metric: took 378.800183ms for pod "kube-controller-manager-addons-379549" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:34.280710  260360 pod_ready.go:83] waiting for pod "kube-proxy-9fnkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:34.680139  260360 pod_ready.go:94] pod "kube-proxy-9fnkd" is "Ready"
	I1017 18:59:34.680164  260360 pod_ready.go:86] duration metric: took 399.422879ms for pod "kube-proxy-9fnkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:34.880504  260360 pod_ready.go:83] waiting for pod "kube-scheduler-addons-379549" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:35.279822  260360 pod_ready.go:94] pod "kube-scheduler-addons-379549" is "Ready"
	I1017 18:59:35.279855  260360 pod_ready.go:86] duration metric: took 399.256483ms for pod "kube-scheduler-addons-379549" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 18:59:35.279869  260360 pod_ready.go:40] duration metric: took 1.603866957s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 18:59:35.347507  260360 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 18:59:35.348983  260360 out.go:179] * Done! kubectl is now configured to use "addons-379549" cluster and "default" namespace by default
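As the gcp-auth addon messages above note, every new pod in the addons-379549 cluster gets the GCP credentials mounted unless it carries a label with the `gcp-auth-skip-secret` key, and pods created before the webhook started must be recreated to pick the mount up. A minimal sketch of opting a pod out at creation time (the pod name `skip-demo` is illustrative and not part of this run; the busybox image is the one pulled later in this report):

    kubectl --context addons-379549 run skip-demo \
      --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc \
      --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 300
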
	
	
	==> CRI-O <==
	Oct 17 18:59:32 addons-379549 crio[833]: time="2025-10-17T18:59:32.412706031Z" level=info msg="Created container 33c887c51c9775153c4f58b08791de8b5bcd6c2887c892fe45a78af221c928fd: gcp-auth/gcp-auth-78565c9fb4-4z5sp/gcp-auth" id=9a721fa4-30ee-4c94-a6ee-0659ce6d9084 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 18:59:32 addons-379549 crio[833]: time="2025-10-17T18:59:32.414516059Z" level=info msg="Starting container: 33c887c51c9775153c4f58b08791de8b5bcd6c2887c892fe45a78af221c928fd" id=633bd15d-35c3-468f-9a69-ed274f470ac4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 18:59:32 addons-379549 crio[833]: time="2025-10-17T18:59:32.418219995Z" level=info msg="Started container" PID=4966 containerID=33c887c51c9775153c4f58b08791de8b5bcd6c2887c892fe45a78af221c928fd description=gcp-auth/gcp-auth-78565c9fb4-4z5sp/gcp-auth id=633bd15d-35c3-468f-9a69-ed274f470ac4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e132ef3e844e737f3b3593a210491b90e64ec1f46226786710cdcbef9f8156c7
	Oct 17 18:59:36 addons-379549 crio[833]: time="2025-10-17T18:59:36.336354483Z" level=info msg="Running pod sandbox: default/busybox/POD" id=53b6f326-ef5a-4cc4-9343-604382ac3cda name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 18:59:36 addons-379549 crio[833]: time="2025-10-17T18:59:36.336440183Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 18:59:36 addons-379549 crio[833]: time="2025-10-17T18:59:36.349230805Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9bc5f8b2117c1ea4e43b23c7587d4d3060adc7c718f402654aef9cb6c9077438 UID:34227125-3da8-44cc-bbcf-a3085cf718b7 NetNS:/var/run/netns/8e6dc271-21e3-42cf-8929-2e5f57da65e1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001e24948}] Aliases:map[]}"
	Oct 17 18:59:36 addons-379549 crio[833]: time="2025-10-17T18:59:36.349274217Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 17 18:59:36 addons-379549 crio[833]: time="2025-10-17T18:59:36.3581849Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9bc5f8b2117c1ea4e43b23c7587d4d3060adc7c718f402654aef9cb6c9077438 UID:34227125-3da8-44cc-bbcf-a3085cf718b7 NetNS:/var/run/netns/8e6dc271-21e3-42cf-8929-2e5f57da65e1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001e24948}] Aliases:map[]}"
	Oct 17 18:59:36 addons-379549 crio[833]: time="2025-10-17T18:59:36.358338258Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 17 18:59:36 addons-379549 crio[833]: time="2025-10-17T18:59:36.361444933Z" level=info msg="Ran pod sandbox 9bc5f8b2117c1ea4e43b23c7587d4d3060adc7c718f402654aef9cb6c9077438 with infra container: default/busybox/POD" id=53b6f326-ef5a-4cc4-9343-604382ac3cda name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 18:59:36 addons-379549 crio[833]: time="2025-10-17T18:59:36.362673791Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ebfda19b-788e-4238-8356-76c826e9e8a8 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 18:59:36 addons-379549 crio[833]: time="2025-10-17T18:59:36.362783269Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ebfda19b-788e-4238-8356-76c826e9e8a8 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 18:59:36 addons-379549 crio[833]: time="2025-10-17T18:59:36.362817763Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ebfda19b-788e-4238-8356-76c826e9e8a8 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 18:59:36 addons-379549 crio[833]: time="2025-10-17T18:59:36.366654052Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c810fa1a-4dc2-4e00-adc9-c215b0620e99 name=/runtime.v1.ImageService/PullImage
	Oct 17 18:59:36 addons-379549 crio[833]: time="2025-10-17T18:59:36.370487863Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 18:59:38 addons-379549 crio[833]: time="2025-10-17T18:59:38.271391426Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=c810fa1a-4dc2-4e00-adc9-c215b0620e99 name=/runtime.v1.ImageService/PullImage
	Oct 17 18:59:38 addons-379549 crio[833]: time="2025-10-17T18:59:38.272263633Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=87350d59-611e-40a6-abb2-56c1b373e5d6 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 18:59:38 addons-379549 crio[833]: time="2025-10-17T18:59:38.276188412Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=55d3f253-06a3-430b-9788-77d03f5b850b name=/runtime.v1.ImageService/ImageStatus
	Oct 17 18:59:38 addons-379549 crio[833]: time="2025-10-17T18:59:38.282756576Z" level=info msg="Creating container: default/busybox/busybox" id=b6a5c407-033c-48f3-a722-e2f526bceb7e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 18:59:38 addons-379549 crio[833]: time="2025-10-17T18:59:38.283617894Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 18:59:38 addons-379549 crio[833]: time="2025-10-17T18:59:38.29783782Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 18:59:38 addons-379549 crio[833]: time="2025-10-17T18:59:38.298429673Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 18:59:38 addons-379549 crio[833]: time="2025-10-17T18:59:38.320866327Z" level=info msg="Created container 1e748d09ef99c66ebb2f6f883e59acb55930df46d1363aac0cd60790715a64bc: default/busybox/busybox" id=b6a5c407-033c-48f3-a722-e2f526bceb7e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 18:59:38 addons-379549 crio[833]: time="2025-10-17T18:59:38.322547751Z" level=info msg="Starting container: 1e748d09ef99c66ebb2f6f883e59acb55930df46d1363aac0cd60790715a64bc" id=e0f330c6-bf03-4c00-b8a8-76500b3efa54 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 18:59:38 addons-379549 crio[833]: time="2025-10-17T18:59:38.324229159Z" level=info msg="Started container" PID=5066 containerID=1e748d09ef99c66ebb2f6f883e59acb55930df46d1363aac0cd60790715a64bc description=default/busybox/busybox id=e0f330c6-bf03-4c00-b8a8-76500b3efa54 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9bc5f8b2117c1ea4e43b23c7587d4d3060adc7c718f402654aef9cb6c9077438
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	1e748d09ef99c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          9 seconds ago        Running             busybox                                  0                   9bc5f8b2117c1       busybox                                     default
	33c887c51c977       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 15 seconds ago       Running             gcp-auth                                 0                   e132ef3e844e7       gcp-auth-78565c9fb4-4z5sp                   gcp-auth
	5cf24bffa8a4a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          18 seconds ago       Running             csi-snapshotter                          0                   2551bf1f3f65e       csi-hostpathplugin-dnj9h                    kube-system
	80799fb75c916       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          19 seconds ago       Running             csi-provisioner                          0                   2551bf1f3f65e       csi-hostpathplugin-dnj9h                    kube-system
	6fde7d0006c1a       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            21 seconds ago       Running             liveness-probe                           0                   2551bf1f3f65e       csi-hostpathplugin-dnj9h                    kube-system
	92b113c7cfe79       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           21 seconds ago       Running             hostpath                                 0                   2551bf1f3f65e       csi-hostpathplugin-dnj9h                    kube-system
	5651bbb1546ea       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                23 seconds ago       Running             node-driver-registrar                    0                   2551bf1f3f65e       csi-hostpathplugin-dnj9h                    kube-system
	68de827b898e9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            24 seconds ago       Running             gadget                                   0                   a831395674642       gadget-9vfvf                                gadget
	7964e74b18162       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             28 seconds ago       Running             controller                               0                   40a4d10244fcd       ingress-nginx-controller-675c5ddd98-qx9b8   ingress-nginx
	d21a27e2ee97e       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             28 seconds ago       Exited              patch                                    3                   19b3ae1490ba2       gcp-auth-certs-patch-gzmf4                  gcp-auth
	85fd1c198568a       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             34 seconds ago       Running             local-path-provisioner                   0                   99d17967d35b9       local-path-provisioner-648f6765c9-mtqnt     local-path-storage
	b06455475d2b3       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              35 seconds ago       Running             registry-proxy                           0                   8b1b6ae624127       registry-proxy-q985d                        kube-system
	97f607e6b94a5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   38 seconds ago       Exited              create                                   0                   082e75cb2425a       gcp-auth-certs-create-p28s5                 gcp-auth
	ce48b4c920d81       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      39 seconds ago       Running             volume-snapshot-controller               0                   df2754c682823       snapshot-controller-7d9fbc56b8-8j5lv        kube-system
	accf4579f8250       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               39 seconds ago       Running             minikube-ingress-dns                     0                   76cff2858c800       kube-ingress-dns-minikube                   kube-system
	fb1f7d0e065d8       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   47 seconds ago       Running             csi-external-health-monitor-controller   0                   2551bf1f3f65e       csi-hostpathplugin-dnj9h                    kube-system
	a9161ab91cb06       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              49 seconds ago       Running             yakd                                     0                   dcd74e1c65f91       yakd-dashboard-5ff678cb9-pk6pq              yakd-dashboard
	3986728e63c14       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              53 seconds ago       Running             csi-resizer                              0                   4b982a81e247a       csi-hostpath-resizer-0                      kube-system
	88eee337e7ec6       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     54 seconds ago       Running             nvidia-device-plugin-ctr                 0                   0aab0381cd387       nvidia-device-plugin-daemonset-5tz6p        kube-system
	287b90d2b10db       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             56 seconds ago       Exited              patch                                    2                   46d75b61673b8       ingress-nginx-admission-patch-5dn9f         ingress-nginx
	8e63327d94af6       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   3afa7f090868b       cloud-spanner-emulator-86bd5cbb97-9vn6g     default
	012db353f99b6       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   7071a6ec8434e       csi-hostpath-attacher-0                     kube-system
	9361ebb005625       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   d015ae841fce6       snapshot-controller-7d9fbc56b8-ctqmz        kube-system
	de5165e5bfa9f       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   c53f2241eb7ca       registry-6b586f9694-lggv9                   kube-system
	5b8f14f3c7ff8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   5b96bd60502c6       ingress-nginx-admission-create-x76j5        ingress-nginx
	37d41037f4ee9       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   03639054dada5       metrics-server-85b7d694d7-kx9vs             kube-system
	c83ac4cff13e7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   bc5f262ce206c       storage-provisioner                         kube-system
	70437ef145370       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   01efd9fefe6c7       coredns-66bc5c9577-cdn2p                    kube-system
	0c926298efaa6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   c08fd3909fd19       kindnet-2gclq                               kube-system
	ad27f04cf6a14       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   065a4b9c92fcd       kube-proxy-9fnkd                            kube-system
	22a266e5672ab       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   1c1968ed28531       kube-controller-manager-addons-379549       kube-system
	beb0486de70d8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   d89a9b4a4fa6e       etcd-addons-379549                          kube-system
	04fd09957b07c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   427e79f9576b5       kube-scheduler-addons-379549                kube-system
	612fc65e5e866       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   9b93e367ae672       kube-apiserver-addons-379549                kube-system
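
The container listing above is collected from the CRI-O runtime on the node. A rough way to reproduce a similar table directly, assuming the profile name from this report and that crictl is available in the node image, is to run it over minikube ssh (column layout may differ slightly):

    out/minikube-linux-arm64 -p addons-379549 ssh "sudo crictl ps -a"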
	
	
	==> coredns [70437ef1453701665ef3d63f7f7a1d3affd361ef34251a1b4b2f6c5615248d1b] <==
	[INFO] 10.244.0.16:54869 - 63964 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000196425s
	[INFO] 10.244.0.16:54869 - 65010 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002225493s
	[INFO] 10.244.0.16:54869 - 38920 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002294561s
	[INFO] 10.244.0.16:54869 - 20198 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000233922s
	[INFO] 10.244.0.16:54869 - 61943 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000170867s
	[INFO] 10.244.0.16:57141 - 12248 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00018553s
	[INFO] 10.244.0.16:57141 - 11787 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000248584s
	[INFO] 10.244.0.16:41176 - 19456 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000117921s
	[INFO] 10.244.0.16:41176 - 19259 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090623s
	[INFO] 10.244.0.16:47833 - 39241 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000093536s
	[INFO] 10.244.0.16:47833 - 39068 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092174s
	[INFO] 10.244.0.16:43188 - 11912 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001837301s
	[INFO] 10.244.0.16:43188 - 11732 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.008709334s
	[INFO] 10.244.0.16:45159 - 12007 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000196623s
	[INFO] 10.244.0.16:45159 - 11670 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000331856s
	[INFO] 10.244.0.21:57605 - 18808 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0001991s
	[INFO] 10.244.0.21:37288 - 39638 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0002848s
	[INFO] 10.244.0.21:45241 - 9411 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139648s
	[INFO] 10.244.0.21:34258 - 48519 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000233101s
	[INFO] 10.244.0.21:58372 - 59605 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125207s
	[INFO] 10.244.0.21:32826 - 33803 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000225495s
	[INFO] 10.244.0.21:44498 - 8327 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004329806s
	[INFO] 10.244.0.21:38031 - 33619 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004695557s
	[INFO] 10.244.0.21:54542 - 4722 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.003169147s
	[INFO] 10.244.0.21:37871 - 8745 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003311191s
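
The NXDOMAIN/NOERROR pairs above are ordinary in-cluster search-domain expansion: CoreDNS is asked for the name appended to each search suffix (namespace, svc.cluster.local, cluster.local, the node's domain) before the fully qualified service name resolves. A quick way to observe the same behaviour from inside the cluster, as a sketch with an illustrative pod name and the busybox image already used in this report:

    kubectl --context addons-379549 run dns-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc \
      -- nslookup registry.kube-system.svc.cluster.local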
	
	
	==> describe nodes <==
	Name:               addons-379549
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-379549
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=addons-379549
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T18_57_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-379549
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-379549"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 18:57:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-379549
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 18:59:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 18:59:40 +0000   Fri, 17 Oct 2025 18:57:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 18:59:40 +0000   Fri, 17 Oct 2025 18:57:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 18:59:40 +0000   Fri, 17 Oct 2025 18:57:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 18:59:40 +0000   Fri, 17 Oct 2025 18:58:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-379549
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                b47e3782-9a4d-4307-bd31-a9c8af0ab3fc
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-86bd5cbb97-9vn6g      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  gadget                      gadget-9vfvf                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  gcp-auth                    gcp-auth-78565c9fb4-4z5sp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-qx9b8    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         119s
	  kube-system                 coredns-66bc5c9577-cdn2p                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m5s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 csi-hostpathplugin-dnj9h                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 etcd-addons-379549                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m10s
	  kube-system                 kindnet-2gclq                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m5s
	  kube-system                 kube-apiserver-addons-379549                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-controller-manager-addons-379549        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-9fnkd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-scheduler-addons-379549                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 metrics-server-85b7d694d7-kx9vs              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         119s
	  kube-system                 nvidia-device-plugin-daemonset-5tz6p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 registry-6b586f9694-lggv9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 registry-creds-764b6fb674-v5s46              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 registry-proxy-q985d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 snapshot-controller-7d9fbc56b8-8j5lv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 snapshot-controller-7d9fbc56b8-ctqmz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  local-path-storage          local-path-provisioner-648f6765c9-mtqnt      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  yakd-dashboard              yakd-dashboard-5ff678cb9-pk6pq               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     119s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m3s                   kube-proxy       
	  Normal   Starting                 2m17s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m17s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m17s (x8 over 2m17s)  kubelet          Node addons-379549 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m17s (x8 over 2m17s)  kubelet          Node addons-379549 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m17s (x8 over 2m17s)  kubelet          Node addons-379549 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m10s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m10s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m10s                  kubelet          Node addons-379549 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m10s                  kubelet          Node addons-379549 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m10s                  kubelet          Node addons-379549 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m6s                   node-controller  Node addons-379549 event: Registered Node addons-379549 in Controller
	  Normal   NodeReady                83s                    kubelet          Node addons-379549 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct17 18:19] overlayfs: idmapped layers are currently not supported
	[Oct17 18:20] overlayfs: idmapped layers are currently not supported
	[ +27.630815] overlayfs: idmapped layers are currently not supported
	[ +17.813448] overlayfs: idmapped layers are currently not supported
	[Oct17 18:24] overlayfs: idmapped layers are currently not supported
	[ +30.872028] overlayfs: idmapped layers are currently not supported
	[Oct17 18:25] overlayfs: idmapped layers are currently not supported
	[Oct17 18:27] overlayfs: idmapped layers are currently not supported
	[Oct17 18:29] overlayfs: idmapped layers are currently not supported
	[Oct17 18:30] overlayfs: idmapped layers are currently not supported
	[Oct17 18:31] overlayfs: idmapped layers are currently not supported
	[  +9.357480] overlayfs: idmapped layers are currently not supported
	[Oct17 18:33] overlayfs: idmapped layers are currently not supported
	[  +5.779853] overlayfs: idmapped layers are currently not supported
	[Oct17 18:34] overlayfs: idmapped layers are currently not supported
	[Oct17 18:35] overlayfs: idmapped layers are currently not supported
	[Oct17 18:36] overlayfs: idmapped layers are currently not supported
	[ +20.850590] overlayfs: idmapped layers are currently not supported
	[Oct17 18:38] overlayfs: idmapped layers are currently not supported
	[ +19.812679] overlayfs: idmapped layers are currently not supported
	[Oct17 18:39] overlayfs: idmapped layers are currently not supported
	[ +19.225178] overlayfs: idmapped layers are currently not supported
	[Oct17 18:40] overlayfs: idmapped layers are currently not supported
	[Oct17 18:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 18:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [beb0486de70d8e5dc49e7b06450eb1df72f27a30d1a116fcef4687a1229bab02] <==
	{"level":"warn","ts":"2025-10-17T18:57:33.358513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.373157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.388755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.411822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.422265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.439273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.458057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.478681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.493239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.513351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.531673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.545406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.564041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.577147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.601684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.623779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.646521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.656111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:33.747307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:49.688731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:57:49.705081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:58:11.733119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:58:11.747376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:58:11.780093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T18:58:11.788547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44944","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [33c887c51c9775153c4f58b08791de8b5bcd6c2887c892fe45a78af221c928fd] <==
	2025/10/17 18:59:32 GCP Auth Webhook started!
	2025/10/17 18:59:35 Ready to marshal response ...
	2025/10/17 18:59:35 Ready to write response ...
	2025/10/17 18:59:36 Ready to marshal response ...
	2025/10/17 18:59:36 Ready to write response ...
	2025/10/17 18:59:36 Ready to marshal response ...
	2025/10/17 18:59:36 Ready to write response ...
	
	
	==> kernel <==
	 18:59:47 up  1:42,  0 user,  load average: 2.10, 1.29, 1.48
	Linux addons-379549 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0c926298efaa60b8e6e7e23cbd555e5271a4b331186cbf064b8a06a84c92da02] <==
	E1017 18:58:13.715789       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 18:58:13.724285       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 18:58:13.724293       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 18:58:13.725549       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1017 18:58:14.924213       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 18:58:14.924311       1 metrics.go:72] Registering metrics
	I1017 18:58:14.924385       1 controller.go:711] "Syncing nftables rules"
	I1017 18:58:23.716616       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 18:58:23.716707       1 main.go:301] handling current node
	I1017 18:58:33.715840       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 18:58:33.715888       1 main.go:301] handling current node
	I1017 18:58:43.717596       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 18:58:43.717623       1 main.go:301] handling current node
	I1017 18:58:53.716193       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 18:58:53.716223       1 main.go:301] handling current node
	I1017 18:59:03.716646       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 18:59:03.716688       1 main.go:301] handling current node
	I1017 18:59:13.716096       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 18:59:13.716180       1 main.go:301] handling current node
	I1017 18:59:23.716216       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 18:59:23.716254       1 main.go:301] handling current node
	I1017 18:59:33.715359       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 18:59:33.715392       1 main.go:301] handling current node
	I1017 18:59:43.716237       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 18:59:43.716342       1 main.go:301] handling current node
	
	
	==> kube-apiserver [612fc65e5e8667898a174c79ca2be5a8ae8041623681c350e5ee77608e36c583] <==
	I1017 18:57:49.205105       1 controller.go:667] quota admission added evaluator for: statefulsets.apps
	I1017 18:57:49.271410       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.101.222.237"}
	W1017 18:57:49.682567       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1017 18:57:49.697402       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1017 18:57:52.471904       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.106.157.28"}
	W1017 18:58:11.732867       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1017 18:58:11.747274       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1017 18:58:11.774244       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1017 18:58:11.788338       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1017 18:58:24.198864       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.157.28:443: connect: connection refused
	E1017 18:58:24.199008       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.157.28:443: connect: connection refused" logger="UnhandledError"
	W1017 18:58:24.199543       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.157.28:443: connect: connection refused
	E1017 18:58:24.199627       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.157.28:443: connect: connection refused" logger="UnhandledError"
	W1017 18:58:24.265259       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.157.28:443: connect: connection refused
	E1017 18:58:24.265318       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.157.28:443: connect: connection refused" logger="UnhandledError"
	W1017 18:58:39.900743       1 handler_proxy.go:99] no RequestInfo found in the context
	E1017 18:58:39.900812       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1017 18:58:39.901775       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.6.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.6.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.6.125:443: connect: connection refused" logger="UnhandledError"
	E1017 18:58:39.902294       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.6.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.6.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.6.125:443: connect: connection refused" logger="UnhandledError"
	E1017 18:58:39.908674       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.6.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.6.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.6.125:443: connect: connection refused" logger="UnhandledError"
	E1017 18:58:39.929939       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.6.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.6.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.6.125:443: connect: connection refused" logger="UnhandledError"
	I1017 18:58:40.107464       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [22a266e5672abf5ca502cdbd17cb99d63f6b55ce0cb5a206303cec2167f7d569] <==
	I1017 18:57:41.724356       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 18:57:41.724371       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 18:57:41.725470       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 18:57:41.725563       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 18:57:41.728674       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 18:57:41.739102       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 18:57:41.745369       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 18:57:41.748575       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 18:57:41.748625       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 18:57:41.749734       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 18:57:41.750865       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 18:57:41.750907       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 18:57:41.750958       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 18:57:41.754367       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 18:57:41.755555       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 18:57:41.758125       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	E1017 18:57:48.072022       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1017 18:58:11.726017       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1017 18:58:11.726189       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1017 18:58:11.726252       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1017 18:58:11.761823       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1017 18:58:11.765940       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1017 18:58:11.826702       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 18:58:11.867698       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 18:58:26.679271       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ad27f04cf6a14e6b40d51c3fe333d53a8ebaf1685edb0d71d7e089c7f96b8001] <==
	I1017 18:57:43.700036       1 server_linux.go:53] "Using iptables proxy"
	I1017 18:57:43.825351       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 18:57:43.926061       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 18:57:43.926100       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 18:57:43.926178       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 18:57:43.980396       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 18:57:43.980452       1 server_linux.go:132] "Using iptables Proxier"
	I1017 18:57:43.984272       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 18:57:43.984570       1 server.go:527] "Version info" version="v1.34.1"
	I1017 18:57:43.984586       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 18:57:43.989847       1 config.go:200] "Starting service config controller"
	I1017 18:57:43.989885       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 18:57:43.989916       1 config.go:106] "Starting endpoint slice config controller"
	I1017 18:57:43.989921       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 18:57:43.989937       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 18:57:43.989941       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 18:57:43.990703       1 config.go:309] "Starting node config controller"
	I1017 18:57:43.990717       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 18:57:43.990724       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 18:57:44.090045       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 18:57:44.090086       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 18:57:44.090147       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [04fd09957b07ce3e283a4d21b3fd7e87d3b47d90a25d55656735805959496cf2] <==
	I1017 18:57:35.877549       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 18:57:35.879841       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 18:57:35.879918       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 18:57:35.880880       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 18:57:35.880942       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 18:57:35.891557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 18:57:35.891780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 18:57:35.891862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 18:57:35.892606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 18:57:35.897236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 18:57:35.897419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 18:57:35.897473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 18:57:35.897560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 18:57:35.897634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 18:57:35.897685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 18:57:35.897751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 18:57:35.897787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 18:57:35.897817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 18:57:35.897874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 18:57:35.897929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 18:57:35.897967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 18:57:35.898058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 18:57:35.898097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 18:57:35.898893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1017 18:57:37.180719       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 18:59:11 addons-379549 kubelet[1304]: I1017 18:59:11.133158    1304 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="082e75cb2425af48aaf35d68a0c8da4b15c0aa44e96dfaa4ae2de6e8ca066681"
	Oct 17 18:59:12 addons-379549 kubelet[1304]: I1017 18:59:12.138417    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-q985d" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 18:59:13 addons-379549 kubelet[1304]: I1017 18:59:13.143893    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-q985d" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 18:59:13 addons-379549 kubelet[1304]: I1017 18:59:13.160591    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-q985d" podStartSLOduration=2.6110339 podStartE2EDuration="49.160514939s" podCreationTimestamp="2025-10-17 18:58:24 +0000 UTC" firstStartedPulling="2025-10-17 18:58:25.40660217 +0000 UTC m=+48.118967743" lastFinishedPulling="2025-10-17 18:59:11.956083209 +0000 UTC m=+94.668448782" observedRunningTime="2025-10-17 18:59:12.156590169 +0000 UTC m=+94.868955782" watchObservedRunningTime="2025-10-17 18:59:13.160514939 +0000 UTC m=+95.872880511"
	Oct 17 18:59:16 addons-379549 kubelet[1304]: I1017 18:59:16.478599    1304 scope.go:117] "RemoveContainer" containerID="a9549e13f8bbc36f5b17081cb1f8af6d5688fc956908733c4e613a9bf4103886"
	Oct 17 18:59:19 addons-379549 kubelet[1304]: I1017 18:59:19.173928    1304 scope.go:117] "RemoveContainer" containerID="a9549e13f8bbc36f5b17081cb1f8af6d5688fc956908733c4e613a9bf4103886"
	Oct 17 18:59:19 addons-379549 kubelet[1304]: I1017 18:59:19.198170    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-qx9b8" podStartSLOduration=44.848987118 podStartE2EDuration="1m31.198138774s" podCreationTimestamp="2025-10-17 18:57:48 +0000 UTC" firstStartedPulling="2025-10-17 18:58:32.54150487 +0000 UTC m=+55.253870443" lastFinishedPulling="2025-10-17 18:59:18.890656526 +0000 UTC m=+101.603022099" observedRunningTime="2025-10-17 18:59:19.188136305 +0000 UTC m=+101.900501903" watchObservedRunningTime="2025-10-17 18:59:19.198138774 +0000 UTC m=+101.910504355"
	Oct 17 18:59:19 addons-379549 kubelet[1304]: I1017 18:59:19.198710    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="local-path-storage/local-path-provisioner-648f6765c9-mtqnt" podStartSLOduration=44.636418633 podStartE2EDuration="1m32.198699564s" podCreationTimestamp="2025-10-17 18:57:47 +0000 UTC" firstStartedPulling="2025-10-17 18:58:25.408207273 +0000 UTC m=+48.120572845" lastFinishedPulling="2025-10-17 18:59:12.970488178 +0000 UTC m=+95.682853776" observedRunningTime="2025-10-17 18:59:13.161556232 +0000 UTC m=+95.873921805" watchObservedRunningTime="2025-10-17 18:59:19.198699564 +0000 UTC m=+101.911065145"
	Oct 17 18:59:20 addons-379549 kubelet[1304]: I1017 18:59:20.536347    1304 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9cnr\" (UniqueName: \"kubernetes.io/projected/2315ad5f-78a9-4ebc-a7b7-64b37e3df2b5-kube-api-access-v9cnr\") pod \"2315ad5f-78a9-4ebc-a7b7-64b37e3df2b5\" (UID: \"2315ad5f-78a9-4ebc-a7b7-64b37e3df2b5\") "
	Oct 17 18:59:20 addons-379549 kubelet[1304]: I1017 18:59:20.542847    1304 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2315ad5f-78a9-4ebc-a7b7-64b37e3df2b5-kube-api-access-v9cnr" (OuterVolumeSpecName: "kube-api-access-v9cnr") pod "2315ad5f-78a9-4ebc-a7b7-64b37e3df2b5" (UID: "2315ad5f-78a9-4ebc-a7b7-64b37e3df2b5"). InnerVolumeSpecName "kube-api-access-v9cnr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 17 18:59:20 addons-379549 kubelet[1304]: I1017 18:59:20.637567    1304 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v9cnr\" (UniqueName: \"kubernetes.io/projected/2315ad5f-78a9-4ebc-a7b7-64b37e3df2b5-kube-api-access-v9cnr\") on node \"addons-379549\" DevicePath \"\""
	Oct 17 18:59:21 addons-379549 kubelet[1304]: I1017 18:59:21.186793    1304 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19b3ae1490ba28e660c036a0bb1ef9a8d31c260af5fb2c5fff0bc817c49c17b9"
	Oct 17 18:59:23 addons-379549 kubelet[1304]: I1017 18:59:23.228311    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-9vfvf" podStartSLOduration=65.261323848 podStartE2EDuration="1m35.22828991s" podCreationTimestamp="2025-10-17 18:57:48 +0000 UTC" firstStartedPulling="2025-10-17 18:58:52.746110316 +0000 UTC m=+75.458475889" lastFinishedPulling="2025-10-17 18:59:22.713076305 +0000 UTC m=+105.425441951" observedRunningTime="2025-10-17 18:59:23.228126124 +0000 UTC m=+105.940491705" watchObservedRunningTime="2025-10-17 18:59:23.22828991 +0000 UTC m=+105.940655483"
	Oct 17 18:59:26 addons-379549 kubelet[1304]: I1017 18:59:26.621336    1304 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 17 18:59:26 addons-379549 kubelet[1304]: I1017 18:59:26.621389    1304 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 17 18:59:28 addons-379549 kubelet[1304]: E1017 18:59:28.211107    1304 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 17 18:59:28 addons-379549 kubelet[1304]: E1017 18:59:28.211196    1304 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26e0457e-0841-4658-b957-473746bb21d1-gcr-creds podName:26e0457e-0841-4658-b957-473746bb21d1 nodeName:}" failed. No retries permitted until 2025-10-17 19:00:32.21117783 +0000 UTC m=+174.923543402 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/26e0457e-0841-4658-b957-473746bb21d1-gcr-creds") pod "registry-creds-764b6fb674-v5s46" (UID: "26e0457e-0841-4658-b957-473746bb21d1") : secret "registry-creds-gcr" not found
	Oct 17 18:59:30 addons-379549 kubelet[1304]: I1017 18:59:30.259839    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-dnj9h" podStartSLOduration=2.10691825 podStartE2EDuration="1m6.259820696s" podCreationTimestamp="2025-10-17 18:58:24 +0000 UTC" firstStartedPulling="2025-10-17 18:58:25.200883516 +0000 UTC m=+47.913249088" lastFinishedPulling="2025-10-17 18:59:29.353785961 +0000 UTC m=+112.066151534" observedRunningTime="2025-10-17 18:59:30.256085057 +0000 UTC m=+112.968450646" watchObservedRunningTime="2025-10-17 18:59:30.259820696 +0000 UTC m=+112.972186269"
	Oct 17 18:59:33 addons-379549 kubelet[1304]: I1017 18:59:33.274294    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-4z5sp" podStartSLOduration=97.400912965 podStartE2EDuration="1m41.274277866s" podCreationTimestamp="2025-10-17 18:57:52 +0000 UTC" firstStartedPulling="2025-10-17 18:59:28.490724333 +0000 UTC m=+111.203089905" lastFinishedPulling="2025-10-17 18:59:32.364089233 +0000 UTC m=+115.076454806" observedRunningTime="2025-10-17 18:59:33.272911428 +0000 UTC m=+115.985277009" watchObservedRunningTime="2025-10-17 18:59:33.274277866 +0000 UTC m=+115.986643455"
	Oct 17 18:59:36 addons-379549 kubelet[1304]: I1017 18:59:36.076612    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/34227125-3da8-44cc-bbcf-a3085cf718b7-gcp-creds\") pod \"busybox\" (UID: \"34227125-3da8-44cc-bbcf-a3085cf718b7\") " pod="default/busybox"
	Oct 17 18:59:36 addons-379549 kubelet[1304]: I1017 18:59:36.077228    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwhmh\" (UniqueName: \"kubernetes.io/projected/34227125-3da8-44cc-bbcf-a3085cf718b7-kube-api-access-bwhmh\") pod \"busybox\" (UID: \"34227125-3da8-44cc-bbcf-a3085cf718b7\") " pod="default/busybox"
	Oct 17 18:59:37 addons-379549 kubelet[1304]: E1017 18:59:37.579647    1304 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5eca3709930011272d4d06fb0d7e18136d0c10c96b662ff44365243de076e940/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5eca3709930011272d4d06fb0d7e18136d0c10c96b662ff44365243de076e940/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/ingress-nginx_ingress-nginx-admission-patch-5dn9f_4dc2db8b-f511-4493-b206-5f5634a796c5/patch/1.log" to get inode usage: stat /var/log/pods/ingress-nginx_ingress-nginx-admission-patch-5dn9f_4dc2db8b-f511-4493-b206-5f5634a796c5/patch/1.log: no such file or directory
	Oct 17 18:59:39 addons-379549 kubelet[1304]: I1017 18:59:39.295040    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.384231419 podStartE2EDuration="3.295023231s" podCreationTimestamp="2025-10-17 18:59:36 +0000 UTC" firstStartedPulling="2025-10-17 18:59:36.363120122 +0000 UTC m=+119.075485695" lastFinishedPulling="2025-10-17 18:59:38.273911934 +0000 UTC m=+120.986277507" observedRunningTime="2025-10-17 18:59:39.293233485 +0000 UTC m=+122.005599066" watchObservedRunningTime="2025-10-17 18:59:39.295023231 +0000 UTC m=+122.007388812"
	Oct 17 18:59:41 addons-379549 kubelet[1304]: I1017 18:59:41.478546    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-lggv9" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 18:59:41 addons-379549 kubelet[1304]: I1017 18:59:41.481136    1304 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c167be9f-c210-41ff-a2a0-7731a2d0c3da" path="/var/lib/kubelet/pods/c167be9f-c210-41ff-a2a0-7731a2d0c3da/volumes"
	
	
	==> storage-provisioner [c83ac4cff13e7be5a7a592b7ef3ad2c0dc7e4d780b6863448ea34fc512f98e11] <==
	W1017 18:59:23.736427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:25.740502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:25.747933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:27.750666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:27.756300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:29.759550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:29.763830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:31.768469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:31.774993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:33.777841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:33.782250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:35.785328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:35.789890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:37.793626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:37.802744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:39.806218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:39.810763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:41.813902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:41.818184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:43.820789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:43.825487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:45.829040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:45.834304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:47.855377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 18:59:47.863108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-379549 -n addons-379549
helpers_test.go:269: (dbg) Run:  kubectl --context addons-379549 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: gcp-auth-certs-patch-gzmf4 ingress-nginx-admission-create-x76j5 ingress-nginx-admission-patch-5dn9f registry-creds-764b6fb674-v5s46
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-379549 describe pod gcp-auth-certs-patch-gzmf4 ingress-nginx-admission-create-x76j5 ingress-nginx-admission-patch-5dn9f registry-creds-764b6fb674-v5s46
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-379549 describe pod gcp-auth-certs-patch-gzmf4 ingress-nginx-admission-create-x76j5 ingress-nginx-admission-patch-5dn9f registry-creds-764b6fb674-v5s46: exit status 1 (96.027935ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-gzmf4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-x76j5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5dn9f" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-v5s46" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-379549 describe pod gcp-auth-certs-patch-gzmf4 ingress-nginx-admission-create-x76j5 ingress-nginx-admission-patch-5dn9f registry-creds-764b6fb674-v5s46: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-379549 addons disable headlamp --alsologtostderr -v=1: exit status 11 (260.740565ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 18:59:49.062389  267040 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:59:49.063277  267040 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:49.063285  267040 out.go:374] Setting ErrFile to fd 2...
	I1017 18:59:49.063291  267040 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:59:49.063561  267040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 18:59:49.063845  267040 mustload.go:65] Loading cluster: addons-379549
	I1017 18:59:49.064827  267040 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:49.064881  267040 addons.go:606] checking whether the cluster is paused
	I1017 18:59:49.065357  267040 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 18:59:49.065405  267040 host.go:66] Checking if "addons-379549" exists ...
	I1017 18:59:49.065940  267040 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 18:59:49.083409  267040 ssh_runner.go:195] Run: systemctl --version
	I1017 18:59:49.083464  267040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 18:59:49.105667  267040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 18:59:49.210726  267040 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 18:59:49.210808  267040 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 18:59:49.245510  267040 cri.go:89] found id: "5cf24bffa8a4abae885a44b533000299393dbf536f944868196b772da2ea935d"
	I1017 18:59:49.245533  267040 cri.go:89] found id: "80799fb75c9169389498ebfca9e8bd150dc22745bd39afd919de30736f993d78"
	I1017 18:59:49.245539  267040 cri.go:89] found id: "6fde7d0006c1aaf6e1954ddbde6bdf9af5d8e3650951bef9ba330e731274d207"
	I1017 18:59:49.245543  267040 cri.go:89] found id: "92b113c7cfe7940976d0561d7ffff8e1ec02e01f0dcc54cd8e589eabf32cc1b0"
	I1017 18:59:49.245547  267040 cri.go:89] found id: "5651bbb1546eae506067477cc633603ca2ac02a842f17e09ce6fe9a79ffa0e0e"
	I1017 18:59:49.245550  267040 cri.go:89] found id: "b06455475d2b37b302d9223e6cc497a0c417c77589f2ced0938ddbd1b2411306"
	I1017 18:59:49.245553  267040 cri.go:89] found id: "ce48b4c920d81fc27eaef5e1119f5ded186bb80b0f7da0544430a2c3fb4fc29a"
	I1017 18:59:49.245556  267040 cri.go:89] found id: "accf4579f8250f27038827ec1b315b311a306293af9ef176a69914469bb2353b"
	I1017 18:59:49.245564  267040 cri.go:89] found id: "fb1f7d0e065d8023e9546ae0a6a64fa04a57b0b47d3b44f594141de71b080618"
	I1017 18:59:49.245570  267040 cri.go:89] found id: "3986728e63c14c7fd277443687da324c568b58d749e701a217495bfa71741734"
	I1017 18:59:49.245574  267040 cri.go:89] found id: "88eee337e7ec6eae66159898b434ac7073a3200b04b237aec88ca3e25bdb2222"
	I1017 18:59:49.245578  267040 cri.go:89] found id: "012db353f99b6e2ef9ff8f6f38fdcfeb8ab14b588f53e8952b29395971f22d83"
	I1017 18:59:49.245581  267040 cri.go:89] found id: "9361ebb005625fb2ad3d70ee0ecdfc71f800630500b97f40a602782e074bb2c4"
	I1017 18:59:49.245608  267040 cri.go:89] found id: "de5165e5bfa9f6277e7973043a69fcf80ecd76150ce5c7fc069314ed88054ea7"
	I1017 18:59:49.245615  267040 cri.go:89] found id: "37d41037f4ee9382157bc059bf46e949eab3051aeb71edbb106837671cf3e24a"
	I1017 18:59:49.245621  267040 cri.go:89] found id: "c83ac4cff13e7be5a7a592b7ef3ad2c0dc7e4d780b6863448ea34fc512f98e11"
	I1017 18:59:49.245625  267040 cri.go:89] found id: "70437ef1453701665ef3d63f7f7a1d3affd361ef34251a1b4b2f6c5615248d1b"
	I1017 18:59:49.245630  267040 cri.go:89] found id: "0c926298efaa60b8e6e7e23cbd555e5271a4b331186cbf064b8a06a84c92da02"
	I1017 18:59:49.245633  267040 cri.go:89] found id: "ad27f04cf6a14e6b40d51c3fe333d53a8ebaf1685edb0d71d7e089c7f96b8001"
	I1017 18:59:49.245636  267040 cri.go:89] found id: "22a266e5672abf5ca502cdbd17cb99d63f6b55ce0cb5a206303cec2167f7d569"
	I1017 18:59:49.245646  267040 cri.go:89] found id: "beb0486de70d8e5dc49e7b06450eb1df72f27a30d1a116fcef4687a1229bab02"
	I1017 18:59:49.245653  267040 cri.go:89] found id: "04fd09957b07ce3e283a4d21b3fd7e87d3b47d90a25d55656735805959496cf2"
	I1017 18:59:49.245656  267040 cri.go:89] found id: "612fc65e5e8667898a174c79ca2be5a8ae8041623681c350e5ee77608e36c583"
	I1017 18:59:49.245659  267040 cri.go:89] found id: ""
	I1017 18:59:49.245706  267040 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 18:59:49.259954  267040 out.go:203] 
	W1017 18:59:49.262897  267040 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T18:59:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 18:59:49.262918  267040 out.go:285] * 
	* 
	W1017 18:59:49.268960  267040 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 18:59:49.271933  267040 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-379549 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.22s)
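Note: the exit status 11 above is not specific to headlamp. The disable path first checks whether the cluster is paused by running `sudo runc list -f json` on the node, and on this crio image that command fails with `open /run/runc: no such file or directory`. A minimal diagnostic sketch (not part of the recorded run; it assumes SSH access to the addons-379549 node via `minikube ssh`, and the /etc/crio config location is an assumption):

	# confirm that runc has no state directory on the node
	minikube ssh -p addons-379549 -- sudo ls /run/runc
	# list kube-system containers through the CRI instead of runc, mirroring the check logged above
	minikube ssh -p addons-379549 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# check which low-level runtime crio is configured with (runc vs crun); the config path is an assumption
	minikube ssh -p addons-379549 -- sudo grep -R default_runtime /etc/crio/ 2>/dev/null

If crio on this image hands containers to crun rather than runc, /run/runc is never created and `runc list` keeps failing even though no container is paused, which would explain why the same MK_ADDON_DISABLE_PAUSED error recurs in the other addon-disable failures below.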

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.28s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-9vn6g" [a4f8a12e-4a35-4a8d-89d7-9ccb0deb1211] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003769906s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-379549 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (268.455909ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:01:01.969887  268939 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:01:01.970713  268939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:01:01.970746  268939 out.go:374] Setting ErrFile to fd 2...
	I1017 19:01:01.970754  268939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:01:01.972828  268939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:01:01.973285  268939 mustload.go:65] Loading cluster: addons-379549
	I1017 19:01:01.973727  268939 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:01:01.973769  268939 addons.go:606] checking whether the cluster is paused
	I1017 19:01:01.973914  268939 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:01:01.973954  268939 host.go:66] Checking if "addons-379549" exists ...
	I1017 19:01:01.974455  268939 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 19:01:01.996751  268939 ssh_runner.go:195] Run: systemctl --version
	I1017 19:01:01.996820  268939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 19:01:02.022950  268939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 19:01:02.127315  268939 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:01:02.127416  268939 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:01:02.157693  268939 cri.go:89] found id: "5cf24bffa8a4abae885a44b533000299393dbf536f944868196b772da2ea935d"
	I1017 19:01:02.157727  268939 cri.go:89] found id: "80799fb75c9169389498ebfca9e8bd150dc22745bd39afd919de30736f993d78"
	I1017 19:01:02.157732  268939 cri.go:89] found id: "6fde7d0006c1aaf6e1954ddbde6bdf9af5d8e3650951bef9ba330e731274d207"
	I1017 19:01:02.157736  268939 cri.go:89] found id: "92b113c7cfe7940976d0561d7ffff8e1ec02e01f0dcc54cd8e589eabf32cc1b0"
	I1017 19:01:02.157740  268939 cri.go:89] found id: "5651bbb1546eae506067477cc633603ca2ac02a842f17e09ce6fe9a79ffa0e0e"
	I1017 19:01:02.157761  268939 cri.go:89] found id: "b06455475d2b37b302d9223e6cc497a0c417c77589f2ced0938ddbd1b2411306"
	I1017 19:01:02.157771  268939 cri.go:89] found id: "ce48b4c920d81fc27eaef5e1119f5ded186bb80b0f7da0544430a2c3fb4fc29a"
	I1017 19:01:02.157774  268939 cri.go:89] found id: "accf4579f8250f27038827ec1b315b311a306293af9ef176a69914469bb2353b"
	I1017 19:01:02.157778  268939 cri.go:89] found id: "fb1f7d0e065d8023e9546ae0a6a64fa04a57b0b47d3b44f594141de71b080618"
	I1017 19:01:02.157786  268939 cri.go:89] found id: "3986728e63c14c7fd277443687da324c568b58d749e701a217495bfa71741734"
	I1017 19:01:02.157792  268939 cri.go:89] found id: "88eee337e7ec6eae66159898b434ac7073a3200b04b237aec88ca3e25bdb2222"
	I1017 19:01:02.157796  268939 cri.go:89] found id: "012db353f99b6e2ef9ff8f6f38fdcfeb8ab14b588f53e8952b29395971f22d83"
	I1017 19:01:02.157800  268939 cri.go:89] found id: "9361ebb005625fb2ad3d70ee0ecdfc71f800630500b97f40a602782e074bb2c4"
	I1017 19:01:02.157804  268939 cri.go:89] found id: "de5165e5bfa9f6277e7973043a69fcf80ecd76150ce5c7fc069314ed88054ea7"
	I1017 19:01:02.157808  268939 cri.go:89] found id: "37d41037f4ee9382157bc059bf46e949eab3051aeb71edbb106837671cf3e24a"
	I1017 19:01:02.157813  268939 cri.go:89] found id: "c83ac4cff13e7be5a7a592b7ef3ad2c0dc7e4d780b6863448ea34fc512f98e11"
	I1017 19:01:02.157819  268939 cri.go:89] found id: "70437ef1453701665ef3d63f7f7a1d3affd361ef34251a1b4b2f6c5615248d1b"
	I1017 19:01:02.157834  268939 cri.go:89] found id: "0c926298efaa60b8e6e7e23cbd555e5271a4b331186cbf064b8a06a84c92da02"
	I1017 19:01:02.157843  268939 cri.go:89] found id: "ad27f04cf6a14e6b40d51c3fe333d53a8ebaf1685edb0d71d7e089c7f96b8001"
	I1017 19:01:02.157846  268939 cri.go:89] found id: "22a266e5672abf5ca502cdbd17cb99d63f6b55ce0cb5a206303cec2167f7d569"
	I1017 19:01:02.157852  268939 cri.go:89] found id: "beb0486de70d8e5dc49e7b06450eb1df72f27a30d1a116fcef4687a1229bab02"
	I1017 19:01:02.157856  268939 cri.go:89] found id: "04fd09957b07ce3e283a4d21b3fd7e87d3b47d90a25d55656735805959496cf2"
	I1017 19:01:02.157863  268939 cri.go:89] found id: "612fc65e5e8667898a174c79ca2be5a8ae8041623681c350e5ee77608e36c583"
	I1017 19:01:02.157867  268939 cri.go:89] found id: ""
	I1017 19:01:02.157933  268939 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:01:02.171907  268939 out.go:203] 
	W1017 19:01:02.173184  268939 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:01:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:01:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:01:02.173209  268939 out.go:285] * 
	* 
	W1017 19:01:02.179358  268939 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:01:02.180804  268939 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-379549 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.28s)
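Note on the failure mode: every addons-disable call in this section exits with status 11 because minikube's "check paused" step, after listing kube-system containers with crictl, shells out to "sudo runc list -f json", and /run/runc does not exist on this CRI-O node (see the "open /run/runc: no such file or directory" stderr above). The same exit-status-11 pattern recurs in the LocalPath, NvidiaDevicePlugin, and Yakd failures below. A minimal shell sketch of a guard is shown here; it uses only the two commands already visible in the log and is an illustration of the workaround, not minikube's actual code path:

	# hypothetical guard: only call `runc list` when its state directory exists;
	# otherwise fall back to the crictl listing the disable path already performs.
	if [ -d /run/runc ]; then
	  sudo runc list -f json
	else
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	fi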

                                                
                                    
TestAddons/parallel/LocalPath (8.35s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-379549 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-379549 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-379549 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [e3f6937f-b522-41fd-8e20-b6533210a8d7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [e3f6937f-b522-41fd-8e20-b6533210a8d7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [e3f6937f-b522-41fd-8e20-b6533210a8d7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003726414s
addons_test.go:967: (dbg) Run:  kubectl --context addons-379549 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 ssh "cat /opt/local-path-provisioner/pvc-5684922c-aed9-497d-9bbf-0e02c327a0d2_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-379549 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-379549 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-379549 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (267.358309ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:00:56.686237  268836 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:00:56.687060  268836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:56.687112  268836 out.go:374] Setting ErrFile to fd 2...
	I1017 19:00:56.687133  268836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:56.687428  268836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:00:56.687777  268836 mustload.go:65] Loading cluster: addons-379549
	I1017 19:00:56.688181  268836 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:56.688224  268836 addons.go:606] checking whether the cluster is paused
	I1017 19:00:56.688356  268836 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:56.688414  268836 host.go:66] Checking if "addons-379549" exists ...
	I1017 19:00:56.688985  268836 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 19:00:56.706838  268836 ssh_runner.go:195] Run: systemctl --version
	I1017 19:00:56.706896  268836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 19:00:56.728772  268836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 19:00:56.831650  268836 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:00:56.831747  268836 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:00:56.868186  268836 cri.go:89] found id: "5cf24bffa8a4abae885a44b533000299393dbf536f944868196b772da2ea935d"
	I1017 19:00:56.868205  268836 cri.go:89] found id: "80799fb75c9169389498ebfca9e8bd150dc22745bd39afd919de30736f993d78"
	I1017 19:00:56.868209  268836 cri.go:89] found id: "6fde7d0006c1aaf6e1954ddbde6bdf9af5d8e3650951bef9ba330e731274d207"
	I1017 19:00:56.868217  268836 cri.go:89] found id: "92b113c7cfe7940976d0561d7ffff8e1ec02e01f0dcc54cd8e589eabf32cc1b0"
	I1017 19:00:56.868220  268836 cri.go:89] found id: "5651bbb1546eae506067477cc633603ca2ac02a842f17e09ce6fe9a79ffa0e0e"
	I1017 19:00:56.868224  268836 cri.go:89] found id: "b06455475d2b37b302d9223e6cc497a0c417c77589f2ced0938ddbd1b2411306"
	I1017 19:00:56.868227  268836 cri.go:89] found id: "ce48b4c920d81fc27eaef5e1119f5ded186bb80b0f7da0544430a2c3fb4fc29a"
	I1017 19:00:56.868230  268836 cri.go:89] found id: "accf4579f8250f27038827ec1b315b311a306293af9ef176a69914469bb2353b"
	I1017 19:00:56.868233  268836 cri.go:89] found id: "fb1f7d0e065d8023e9546ae0a6a64fa04a57b0b47d3b44f594141de71b080618"
	I1017 19:00:56.868239  268836 cri.go:89] found id: "3986728e63c14c7fd277443687da324c568b58d749e701a217495bfa71741734"
	I1017 19:00:56.868242  268836 cri.go:89] found id: "88eee337e7ec6eae66159898b434ac7073a3200b04b237aec88ca3e25bdb2222"
	I1017 19:00:56.868245  268836 cri.go:89] found id: "012db353f99b6e2ef9ff8f6f38fdcfeb8ab14b588f53e8952b29395971f22d83"
	I1017 19:00:56.868248  268836 cri.go:89] found id: "9361ebb005625fb2ad3d70ee0ecdfc71f800630500b97f40a602782e074bb2c4"
	I1017 19:00:56.868251  268836 cri.go:89] found id: "de5165e5bfa9f6277e7973043a69fcf80ecd76150ce5c7fc069314ed88054ea7"
	I1017 19:00:56.868255  268836 cri.go:89] found id: "37d41037f4ee9382157bc059bf46e949eab3051aeb71edbb106837671cf3e24a"
	I1017 19:00:56.868259  268836 cri.go:89] found id: "c83ac4cff13e7be5a7a592b7ef3ad2c0dc7e4d780b6863448ea34fc512f98e11"
	I1017 19:00:56.868262  268836 cri.go:89] found id: "70437ef1453701665ef3d63f7f7a1d3affd361ef34251a1b4b2f6c5615248d1b"
	I1017 19:00:56.868265  268836 cri.go:89] found id: "0c926298efaa60b8e6e7e23cbd555e5271a4b331186cbf064b8a06a84c92da02"
	I1017 19:00:56.868269  268836 cri.go:89] found id: "ad27f04cf6a14e6b40d51c3fe333d53a8ebaf1685edb0d71d7e089c7f96b8001"
	I1017 19:00:56.868272  268836 cri.go:89] found id: "22a266e5672abf5ca502cdbd17cb99d63f6b55ce0cb5a206303cec2167f7d569"
	I1017 19:00:56.868276  268836 cri.go:89] found id: "beb0486de70d8e5dc49e7b06450eb1df72f27a30d1a116fcef4687a1229bab02"
	I1017 19:00:56.868279  268836 cri.go:89] found id: "04fd09957b07ce3e283a4d21b3fd7e87d3b47d90a25d55656735805959496cf2"
	I1017 19:00:56.868282  268836 cri.go:89] found id: "612fc65e5e8667898a174c79ca2be5a8ae8041623681c350e5ee77608e36c583"
	I1017 19:00:56.868285  268836 cri.go:89] found id: ""
	I1017 19:00:56.868339  268836 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:00:56.891184  268836 out.go:203] 
	W1017 19:00:56.892411  268836 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:00:56.892487  268836 out.go:285] * 
	* 
	W1017 19:00:56.898682  268836 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:00:56.901292  268836 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-379549 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.35s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.3s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-5tz6p" [379ab14e-3f5a-4e60-a28a-563f7f5de7af] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003233359s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-379549 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (293.517905ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:00:42.017363  268464 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:00:42.018239  268464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:42.018272  268464 out.go:374] Setting ErrFile to fd 2...
	I1017 19:00:42.018280  268464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:42.018702  268464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:00:42.019125  268464 mustload.go:65] Loading cluster: addons-379549
	I1017 19:00:42.019582  268464 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:42.019659  268464 addons.go:606] checking whether the cluster is paused
	I1017 19:00:42.019786  268464 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:42.019806  268464 host.go:66] Checking if "addons-379549" exists ...
	I1017 19:00:42.020374  268464 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 19:00:42.042256  268464 ssh_runner.go:195] Run: systemctl --version
	I1017 19:00:42.042334  268464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 19:00:42.061666  268464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 19:00:42.177008  268464 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:00:42.177154  268464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:00:42.218222  268464 cri.go:89] found id: "5cf24bffa8a4abae885a44b533000299393dbf536f944868196b772da2ea935d"
	I1017 19:00:42.218254  268464 cri.go:89] found id: "80799fb75c9169389498ebfca9e8bd150dc22745bd39afd919de30736f993d78"
	I1017 19:00:42.218261  268464 cri.go:89] found id: "6fde7d0006c1aaf6e1954ddbde6bdf9af5d8e3650951bef9ba330e731274d207"
	I1017 19:00:42.218265  268464 cri.go:89] found id: "92b113c7cfe7940976d0561d7ffff8e1ec02e01f0dcc54cd8e589eabf32cc1b0"
	I1017 19:00:42.218269  268464 cri.go:89] found id: "5651bbb1546eae506067477cc633603ca2ac02a842f17e09ce6fe9a79ffa0e0e"
	I1017 19:00:42.218273  268464 cri.go:89] found id: "b06455475d2b37b302d9223e6cc497a0c417c77589f2ced0938ddbd1b2411306"
	I1017 19:00:42.218276  268464 cri.go:89] found id: "ce48b4c920d81fc27eaef5e1119f5ded186bb80b0f7da0544430a2c3fb4fc29a"
	I1017 19:00:42.218280  268464 cri.go:89] found id: "accf4579f8250f27038827ec1b315b311a306293af9ef176a69914469bb2353b"
	I1017 19:00:42.218284  268464 cri.go:89] found id: "fb1f7d0e065d8023e9546ae0a6a64fa04a57b0b47d3b44f594141de71b080618"
	I1017 19:00:42.218291  268464 cri.go:89] found id: "3986728e63c14c7fd277443687da324c568b58d749e701a217495bfa71741734"
	I1017 19:00:42.218295  268464 cri.go:89] found id: "88eee337e7ec6eae66159898b434ac7073a3200b04b237aec88ca3e25bdb2222"
	I1017 19:00:42.218298  268464 cri.go:89] found id: "012db353f99b6e2ef9ff8f6f38fdcfeb8ab14b588f53e8952b29395971f22d83"
	I1017 19:00:42.218301  268464 cri.go:89] found id: "9361ebb005625fb2ad3d70ee0ecdfc71f800630500b97f40a602782e074bb2c4"
	I1017 19:00:42.218304  268464 cri.go:89] found id: "de5165e5bfa9f6277e7973043a69fcf80ecd76150ce5c7fc069314ed88054ea7"
	I1017 19:00:42.218308  268464 cri.go:89] found id: "37d41037f4ee9382157bc059bf46e949eab3051aeb71edbb106837671cf3e24a"
	I1017 19:00:42.218315  268464 cri.go:89] found id: "c83ac4cff13e7be5a7a592b7ef3ad2c0dc7e4d780b6863448ea34fc512f98e11"
	I1017 19:00:42.218319  268464 cri.go:89] found id: "70437ef1453701665ef3d63f7f7a1d3affd361ef34251a1b4b2f6c5615248d1b"
	I1017 19:00:42.218330  268464 cri.go:89] found id: "0c926298efaa60b8e6e7e23cbd555e5271a4b331186cbf064b8a06a84c92da02"
	I1017 19:00:42.218335  268464 cri.go:89] found id: "ad27f04cf6a14e6b40d51c3fe333d53a8ebaf1685edb0d71d7e089c7f96b8001"
	I1017 19:00:42.218338  268464 cri.go:89] found id: "22a266e5672abf5ca502cdbd17cb99d63f6b55ce0cb5a206303cec2167f7d569"
	I1017 19:00:42.218343  268464 cri.go:89] found id: "beb0486de70d8e5dc49e7b06450eb1df72f27a30d1a116fcef4687a1229bab02"
	I1017 19:00:42.218347  268464 cri.go:89] found id: "04fd09957b07ce3e283a4d21b3fd7e87d3b47d90a25d55656735805959496cf2"
	I1017 19:00:42.218350  268464 cri.go:89] found id: "612fc65e5e8667898a174c79ca2be5a8ae8041623681c350e5ee77608e36c583"
	I1017 19:00:42.218355  268464 cri.go:89] found id: ""
	I1017 19:00:42.218416  268464 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:00:42.239657  268464 out.go:203] 
	W1017 19:00:42.240840  268464 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:00:42.240872  268464 out.go:285] * 
	* 
	W1017 19:00:42.248039  268464 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:00:42.250545  268464 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-379549 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.30s)

                                                
                                    
TestAddons/parallel/Yakd (6.3s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-pk6pq" [9747ae27-ac83-406a-bc46-9c4c6a39512c] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.012203256s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-379549 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-379549 addons disable yakd --alsologtostderr -v=1: exit status 11 (284.227968ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:00:48.349268  268546 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:00:48.350580  268546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:48.350631  268546 out.go:374] Setting ErrFile to fd 2...
	I1017 19:00:48.350652  268546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:00:48.350994  268546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:00:48.351562  268546 mustload.go:65] Loading cluster: addons-379549
	I1017 19:00:48.352213  268546 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:48.352272  268546 addons.go:606] checking whether the cluster is paused
	I1017 19:00:48.352445  268546 config.go:182] Loaded profile config "addons-379549": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:00:48.352486  268546 host.go:66] Checking if "addons-379549" exists ...
	I1017 19:00:48.353117  268546 cli_runner.go:164] Run: docker container inspect addons-379549 --format={{.State.Status}}
	I1017 19:00:48.373476  268546 ssh_runner.go:195] Run: systemctl --version
	I1017 19:00:48.373539  268546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-379549
	I1017 19:00:48.393457  268546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/addons-379549/id_rsa Username:docker}
	I1017 19:00:48.495083  268546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:00:48.495187  268546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:00:48.526291  268546 cri.go:89] found id: "5cf24bffa8a4abae885a44b533000299393dbf536f944868196b772da2ea935d"
	I1017 19:00:48.526314  268546 cri.go:89] found id: "80799fb75c9169389498ebfca9e8bd150dc22745bd39afd919de30736f993d78"
	I1017 19:00:48.526319  268546 cri.go:89] found id: "6fde7d0006c1aaf6e1954ddbde6bdf9af5d8e3650951bef9ba330e731274d207"
	I1017 19:00:48.526323  268546 cri.go:89] found id: "92b113c7cfe7940976d0561d7ffff8e1ec02e01f0dcc54cd8e589eabf32cc1b0"
	I1017 19:00:48.526326  268546 cri.go:89] found id: "5651bbb1546eae506067477cc633603ca2ac02a842f17e09ce6fe9a79ffa0e0e"
	I1017 19:00:48.526329  268546 cri.go:89] found id: "b06455475d2b37b302d9223e6cc497a0c417c77589f2ced0938ddbd1b2411306"
	I1017 19:00:48.526333  268546 cri.go:89] found id: "ce48b4c920d81fc27eaef5e1119f5ded186bb80b0f7da0544430a2c3fb4fc29a"
	I1017 19:00:48.526354  268546 cri.go:89] found id: "accf4579f8250f27038827ec1b315b311a306293af9ef176a69914469bb2353b"
	I1017 19:00:48.526364  268546 cri.go:89] found id: "fb1f7d0e065d8023e9546ae0a6a64fa04a57b0b47d3b44f594141de71b080618"
	I1017 19:00:48.526372  268546 cri.go:89] found id: "3986728e63c14c7fd277443687da324c568b58d749e701a217495bfa71741734"
	I1017 19:00:48.526375  268546 cri.go:89] found id: "88eee337e7ec6eae66159898b434ac7073a3200b04b237aec88ca3e25bdb2222"
	I1017 19:00:48.526378  268546 cri.go:89] found id: "012db353f99b6e2ef9ff8f6f38fdcfeb8ab14b588f53e8952b29395971f22d83"
	I1017 19:00:48.526382  268546 cri.go:89] found id: "9361ebb005625fb2ad3d70ee0ecdfc71f800630500b97f40a602782e074bb2c4"
	I1017 19:00:48.526385  268546 cri.go:89] found id: "de5165e5bfa9f6277e7973043a69fcf80ecd76150ce5c7fc069314ed88054ea7"
	I1017 19:00:48.526388  268546 cri.go:89] found id: "37d41037f4ee9382157bc059bf46e949eab3051aeb71edbb106837671cf3e24a"
	I1017 19:00:48.526397  268546 cri.go:89] found id: "c83ac4cff13e7be5a7a592b7ef3ad2c0dc7e4d780b6863448ea34fc512f98e11"
	I1017 19:00:48.526404  268546 cri.go:89] found id: "70437ef1453701665ef3d63f7f7a1d3affd361ef34251a1b4b2f6c5615248d1b"
	I1017 19:00:48.526410  268546 cri.go:89] found id: "0c926298efaa60b8e6e7e23cbd555e5271a4b331186cbf064b8a06a84c92da02"
	I1017 19:00:48.526413  268546 cri.go:89] found id: "ad27f04cf6a14e6b40d51c3fe333d53a8ebaf1685edb0d71d7e089c7f96b8001"
	I1017 19:00:48.526416  268546 cri.go:89] found id: "22a266e5672abf5ca502cdbd17cb99d63f6b55ce0cb5a206303cec2167f7d569"
	I1017 19:00:48.526441  268546 cri.go:89] found id: "beb0486de70d8e5dc49e7b06450eb1df72f27a30d1a116fcef4687a1229bab02"
	I1017 19:00:48.526445  268546 cri.go:89] found id: "04fd09957b07ce3e283a4d21b3fd7e87d3b47d90a25d55656735805959496cf2"
	I1017 19:00:48.526448  268546 cri.go:89] found id: "612fc65e5e8667898a174c79ca2be5a8ae8041623681c350e5ee77608e36c583"
	I1017 19:00:48.526451  268546 cri.go:89] found id: ""
	I1017 19:00:48.526514  268546 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 19:00:48.539710  268546 out.go:203] 
	W1017 19:00:48.541002  268546 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:00:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 19:00:48.541029  268546 out.go:285] * 
	* 
	W1017 19:00:48.546927  268546 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 19:00:48.548293  268546 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-379549 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-998954 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-998954 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-8w4tw" [9d5dbf59-f69f-440d-a57a-8843ec8ee49b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-998954 -n functional-998954
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-17 19:16:57.610970167 +0000 UTC m=+1226.780420225
functional_test.go:1645: (dbg) Run:  kubectl --context functional-998954 describe po hello-node-connect-7d85dfc575-8w4tw -n default
functional_test.go:1645: (dbg) kubectl --context functional-998954 describe po hello-node-connect-7d85dfc575-8w4tw -n default:
Name:             hello-node-connect-7d85dfc575-8w4tw
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-998954/192.168.49.2
Start Time:       Fri, 17 Oct 2025 19:06:57 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-prrbl (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-prrbl:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-8w4tw to functional-998954
Normal   Pulling    7m4s (x5 over 9m58s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m4s (x5 over 9m58s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m57s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m57s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-998954 logs hello-node-connect-7d85dfc575-8w4tw -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-998954 logs hello-node-connect-7d85dfc575-8w4tw -n default: exit status 1 (96.972792ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-8w4tw" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-998954 logs hello-node-connect-7d85dfc575-8w4tw -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
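Note on the image-pull failure: the pod never starts because the deployment was created with the short image name "kicbase/echo-server" (functional_test.go:1636 above), and the kubelet events report that short-name resolution is in enforcing mode and the name resolves to an ambiguous list of registries. A hedged repro of the usual workaround, assuming the image is published on Docker Hub under the same name, is to create the deployment with a fully qualified reference so no short-name resolution is needed:

	# hypothetical variant of the command at functional_test.go:1636; the
	# docker.io prefix is an assumption about where the image is hosted.
	kubectl --context functional-998954 create deployment hello-node-connect \
	  --image docker.io/kicbase/echo-server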
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-998954 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-8w4tw
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-998954/192.168.49.2
Start Time:       Fri, 17 Oct 2025 19:06:57 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-prrbl (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-prrbl:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-8w4tw to functional-998954
Normal   Pulling    7m4s (x5 over 9m58s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m4s (x5 over 9m58s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m57s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m57s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-998954 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-998954 logs -l app=hello-node-connect: exit status 1 (88.415136ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-8w4tw" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-998954 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-998954 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.18.233
IPs:                      10.96.18.233
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32701/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-998954
helpers_test.go:243: (dbg) docker inspect functional-998954:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0513b8de7122e65b93fbda25ae0bc73b5fa310ad92098902c30ced4f87a24e8b",
	        "Created": "2025-10-17T19:04:04.041309389Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 275341,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:04:04.105095754Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/0513b8de7122e65b93fbda25ae0bc73b5fa310ad92098902c30ced4f87a24e8b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0513b8de7122e65b93fbda25ae0bc73b5fa310ad92098902c30ced4f87a24e8b/hostname",
	        "HostsPath": "/var/lib/docker/containers/0513b8de7122e65b93fbda25ae0bc73b5fa310ad92098902c30ced4f87a24e8b/hosts",
	        "LogPath": "/var/lib/docker/containers/0513b8de7122e65b93fbda25ae0bc73b5fa310ad92098902c30ced4f87a24e8b/0513b8de7122e65b93fbda25ae0bc73b5fa310ad92098902c30ced4f87a24e8b-json.log",
	        "Name": "/functional-998954",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-998954:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-998954",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0513b8de7122e65b93fbda25ae0bc73b5fa310ad92098902c30ced4f87a24e8b",
	                "LowerDir": "/var/lib/docker/overlay2/be008f425efcf1d87756a8ea5ca84de669c38cbba14892fae379dcbd09393447-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/be008f425efcf1d87756a8ea5ca84de669c38cbba14892fae379dcbd09393447/merged",
	                "UpperDir": "/var/lib/docker/overlay2/be008f425efcf1d87756a8ea5ca84de669c38cbba14892fae379dcbd09393447/diff",
	                "WorkDir": "/var/lib/docker/overlay2/be008f425efcf1d87756a8ea5ca84de669c38cbba14892fae379dcbd09393447/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-998954",
	                "Source": "/var/lib/docker/volumes/functional-998954/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-998954",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-998954",
	                "name.minikube.sigs.k8s.io": "functional-998954",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "11221daf59180d8d89958e964c8918f8fbde17666a59689e3487cec384560c73",
	            "SandboxKey": "/var/run/docker/netns/11221daf5918",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-998954": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:93:cd:5e:89:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a3f0842db5f562b7748b2d9753cad1c63c1d8592726907f20e02eb2c2bb7b114",
	                    "EndpointID": "a729681875c5897c485e8abddf1467a411c55b4fb901b4265c4108844a4e384f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-998954",
	                        "0513b8de7122"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-998954 -n functional-998954
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-998954 logs -n 25: (1.418744405s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-998954 ssh sudo cat /etc/ssl/certs/2595962.pem                                                                                                 │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ image   │ functional-998954 image ls                                                                                                                                │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ ssh     │ functional-998954 ssh sudo cat /usr/share/ca-certificates/2595962.pem                                                                                     │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ image   │ functional-998954 image load --daemon kicbase/echo-server:functional-998954 --alsologtostderr                                                             │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ ssh     │ functional-998954 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ ssh     │ functional-998954 ssh sudo cat /etc/test/nested/copy/259596/hosts                                                                                         │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ image   │ functional-998954 image ls                                                                                                                                │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ cp      │ functional-998954 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                        │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ ssh     │ functional-998954 ssh -n functional-998954 sudo cat /home/docker/cp-test.txt                                                                              │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ image   │ functional-998954 image load --daemon kicbase/echo-server:functional-998954 --alsologtostderr                                                             │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ cp      │ functional-998954 cp functional-998954:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd615479210/001/cp-test.txt                                 │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ ssh     │ functional-998954 ssh -n functional-998954 sudo cat /home/docker/cp-test.txt                                                                              │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ image   │ functional-998954 image ls                                                                                                                                │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ cp      │ functional-998954 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                 │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ image   │ functional-998954 image save kicbase/echo-server:functional-998954 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ ssh     │ functional-998954 ssh -n functional-998954 sudo cat /tmp/does/not/exist/cp-test.txt                                                                       │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ image   │ functional-998954 image rm kicbase/echo-server:functional-998954 --alsologtostderr                                                                        │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ image   │ functional-998954 image ls                                                                                                                                │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ image   │ functional-998954 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ image   │ functional-998954 image save --daemon kicbase/echo-server:functional-998954 --alsologtostderr                                                             │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ tunnel  │ functional-998954 tunnel --alsologtostderr                                                                                                                │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │                     │
	│ tunnel  │ functional-998954 tunnel --alsologtostderr                                                                                                                │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │                     │
	│ tunnel  │ functional-998954 tunnel --alsologtostderr                                                                                                                │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │                     │
	│ addons  │ functional-998954 addons list                                                                                                                             │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	│ addons  │ functional-998954 addons list -o json                                                                                                                     │ functional-998954 │ jenkins │ v1.37.0 │ 17 Oct 25 19:06 UTC │ 17 Oct 25 19:06 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:06:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:06:40.264705  281573 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:06:40.266060  281573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:06:40.266078  281573 out.go:374] Setting ErrFile to fd 2...
	I1017 19:06:40.266085  281573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:06:40.266387  281573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:06:40.267101  281573 out.go:368] Setting JSON to false
	I1017 19:06:40.268056  281573 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6551,"bootTime":1760721449,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 19:06:40.268136  281573 start.go:141] virtualization:  
	I1017 19:06:40.273800  281573 out.go:179] * [functional-998954] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 19:06:40.276793  281573 notify.go:220] Checking for updates...
	I1017 19:06:40.278242  281573 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:06:40.281281  281573 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:06:40.284295  281573 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:06:40.288210  281573 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 19:06:40.291126  281573 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 19:06:40.294094  281573 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:06:40.297905  281573 config.go:182] Loaded profile config "functional-998954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:06:40.298655  281573 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:06:40.339931  281573 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 19:06:40.340051  281573 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:06:40.413221  281573 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-17 19:06:40.402954347 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:06:40.413328  281573 docker.go:318] overlay module found
	I1017 19:06:40.416451  281573 out.go:179] * Using the docker driver based on existing profile
	I1017 19:06:40.419341  281573 start.go:305] selected driver: docker
	I1017 19:06:40.419367  281573 start.go:925] validating driver "docker" against &{Name:functional-998954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-998954 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:06:40.419470  281573 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:06:40.419581  281573 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:06:40.500603  281573 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-17 19:06:40.491325459 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:06:40.501001  281573 cni.go:84] Creating CNI manager for ""
	I1017 19:06:40.501064  281573 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:06:40.501105  281573 start.go:349] cluster config:
	{Name:functional-998954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-998954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:06:40.504653  281573 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 17 19:07:14 functional-998954 crio[3512]: time="2025-10-17T19:07:14.682626304Z" level=info msg="Checking pod default_hello-node-75c85bcc94-ldl88 for CNI network kindnet (type=ptp)"
	Oct 17 19:07:14 functional-998954 crio[3512]: time="2025-10-17T19:07:14.69022808Z" level=info msg="Ran pod sandbox a1d18ba29dae96320e93a75862e6ca5ee77a26e2b3698a67a2a499ca0acb5370 with infra container: default/hello-node-75c85bcc94-ldl88/POD" id=b1550e0d-5608-4834-9b63-e3dac3284eb7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 19:07:14 functional-998954 crio[3512]: time="2025-10-17T19:07:14.692932098Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=05611c53-8d68-43d2-b707-7cfd464d484b name=/runtime.v1.ImageService/PullImage
	Oct 17 19:07:14 functional-998954 crio[3512]: time="2025-10-17T19:07:14.708126124Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d66b811c-1804-4bc5-9fff-398381d98f51 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:07:15 functional-998954 crio[3512]: time="2025-10-17T19:07:15.680454425Z" level=info msg="Stopping pod sandbox: 263e7ab2484db594056ab4a1e37bba5eb14a77d50e28c27bd2191bddea9b4c64" id=28a6b088-0453-44e5-8f73-9d0bd307664e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 19:07:15 functional-998954 crio[3512]: time="2025-10-17T19:07:15.680510718Z" level=info msg="Stopped pod sandbox (already stopped): 263e7ab2484db594056ab4a1e37bba5eb14a77d50e28c27bd2191bddea9b4c64" id=28a6b088-0453-44e5-8f73-9d0bd307664e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 19:07:15 functional-998954 crio[3512]: time="2025-10-17T19:07:15.681189549Z" level=info msg="Removing pod sandbox: 263e7ab2484db594056ab4a1e37bba5eb14a77d50e28c27bd2191bddea9b4c64" id=3f0f569b-220a-4565-800f-5ab65b3eeb1b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 19:07:15 functional-998954 crio[3512]: time="2025-10-17T19:07:15.684940043Z" level=info msg="Removed pod sandbox: 263e7ab2484db594056ab4a1e37bba5eb14a77d50e28c27bd2191bddea9b4c64" id=3f0f569b-220a-4565-800f-5ab65b3eeb1b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 19:07:15 functional-998954 crio[3512]: time="2025-10-17T19:07:15.685713271Z" level=info msg="Stopping pod sandbox: d418e96f96c40ec0ee08a96fbf3c4772f5ae2717c61e8840494f4baaaf9e952b" id=24abb2af-fce8-48aa-9570-feb12d383cef name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 19:07:15 functional-998954 crio[3512]: time="2025-10-17T19:07:15.685762673Z" level=info msg="Stopped pod sandbox (already stopped): d418e96f96c40ec0ee08a96fbf3c4772f5ae2717c61e8840494f4baaaf9e952b" id=24abb2af-fce8-48aa-9570-feb12d383cef name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 19:07:15 functional-998954 crio[3512]: time="2025-10-17T19:07:15.686054357Z" level=info msg="Removing pod sandbox: d418e96f96c40ec0ee08a96fbf3c4772f5ae2717c61e8840494f4baaaf9e952b" id=5713814c-9ba3-4372-8de9-69c844ab5c50 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 19:07:15 functional-998954 crio[3512]: time="2025-10-17T19:07:15.689724393Z" level=info msg="Removed pod sandbox: d418e96f96c40ec0ee08a96fbf3c4772f5ae2717c61e8840494f4baaaf9e952b" id=5713814c-9ba3-4372-8de9-69c844ab5c50 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 19:07:15 functional-998954 crio[3512]: time="2025-10-17T19:07:15.690176408Z" level=info msg="Stopping pod sandbox: dc19dba05174e2df04a9e836efe5a13f58a65da9a00da40a63a413021c440e71" id=33e2a522-517f-479d-869f-c158a719a289 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 19:07:15 functional-998954 crio[3512]: time="2025-10-17T19:07:15.690221625Z" level=info msg="Stopped pod sandbox (already stopped): dc19dba05174e2df04a9e836efe5a13f58a65da9a00da40a63a413021c440e71" id=33e2a522-517f-479d-869f-c158a719a289 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 19:07:15 functional-998954 crio[3512]: time="2025-10-17T19:07:15.690528972Z" level=info msg="Removing pod sandbox: dc19dba05174e2df04a9e836efe5a13f58a65da9a00da40a63a413021c440e71" id=d06ae543-9e20-439f-bf4a-d823d0999eb8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 19:07:15 functional-998954 crio[3512]: time="2025-10-17T19:07:15.693925597Z" level=info msg="Removed pod sandbox: dc19dba05174e2df04a9e836efe5a13f58a65da9a00da40a63a413021c440e71" id=d06ae543-9e20-439f-bf4a-d823d0999eb8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 19:07:26 functional-998954 crio[3512]: time="2025-10-17T19:07:26.709180614Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9d33bac5-9e8e-4824-aafe-e3c5ae7726d1 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:07:41 functional-998954 crio[3512]: time="2025-10-17T19:07:41.708400661Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=10314a4a-22f0-480c-9020-25262b0dc4aa name=/runtime.v1.ImageService/PullImage
	Oct 17 19:07:52 functional-998954 crio[3512]: time="2025-10-17T19:07:52.707908474Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7f84bc2a-abc1-4629-9db7-ce6958460639 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:08:30 functional-998954 crio[3512]: time="2025-10-17T19:08:30.708133402Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2ea753dc-ec8a-43ed-9d9c-f5e0b1ef4834 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:08:33 functional-998954 crio[3512]: time="2025-10-17T19:08:33.708424166Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=206bbf83-5d2a-4dad-968c-375c98552afc name=/runtime.v1.ImageService/PullImage
	Oct 17 19:09:53 functional-998954 crio[3512]: time="2025-10-17T19:09:53.707956945Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9d640112-80e4-443d-b4f7-6bfa113ae13f name=/runtime.v1.ImageService/PullImage
	Oct 17 19:10:08 functional-998954 crio[3512]: time="2025-10-17T19:10:08.708596835Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=fbeb50c0-fa13-4a6a-b1f3-2c5e834f074b name=/runtime.v1.ImageService/PullImage
	Oct 17 19:12:41 functional-998954 crio[3512]: time="2025-10-17T19:12:41.708054997Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=af771679-e7f5-4d66-92e7-f0debe3afe68 name=/runtime.v1.ImageService/PullImage
	Oct 17 19:12:54 functional-998954 crio[3512]: time="2025-10-17T19:12:54.70835466Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=710b8b06-af85-4444-a9b6-3a1fd7eabc37 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c472459ad11ea       docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a   9 minutes ago       Running             myfrontend                0                   26a4deb066c91       sp-pod                                      default
	a02c3c1b1a4fc       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0   10 minutes ago      Running             nginx                     0                   32848e5cf5168       nginx-svc                                   default
	eff3e96e64e21       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   508de128b0de9       kube-proxy-74xrn                            kube-system
	4ec1aa754745b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   c5ea9006cf62f       storage-provisioner                         kube-system
	1df28ee557c97       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   3871bef7d18c7       kindnet-dwqs8                               kube-system
	538c8666bb3d9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   42eb1aa4a15f4       coredns-66bc5c9577-wklt6                    kube-system
	f159ce8307643       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   076cc175f2ad4       kube-apiserver-functional-998954            kube-system
	b5cd1b23acfdb       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   6c013e1f3d86d       kube-scheduler-functional-998954            kube-system
	e6407232fcae0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   2d88d425d394e       kube-controller-manager-functional-998954   kube-system
	cde11a4aaf1e3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   6b7a88ebf8833       etcd-functional-998954                      kube-system
	fc94c2af292e8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   2d88d425d394e       kube-controller-manager-functional-998954   kube-system
	635f71f8965ad       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   6b7a88ebf8833       etcd-functional-998954                      kube-system
	332cbdef6155b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   42eb1aa4a15f4       coredns-66bc5c9577-wklt6                    kube-system
	60962638bee7a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   c5ea9006cf62f       storage-provisioner                         kube-system
	ca632b64de347       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   6c013e1f3d86d       kube-scheduler-functional-998954            kube-system
	1a6ec111b937e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   508de128b0de9       kube-proxy-74xrn                            kube-system
	e5bf263d382a1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   3871bef7d18c7       kindnet-dwqs8                               kube-system
	
	
	==> coredns [332cbdef6155b8184d2d0ef134e43c8db23ccf5c3fd4acfa1d96d98f6ec2cdb4] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56820 - 39750 "HINFO IN 5793439956017113766.6553079860477946755. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.054005192s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [538c8666bb3d9b6540eced60fb029fbe62ee6251b053e4983033f411b677c285] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60673 - 63897 "HINFO IN 6124939951356945895.7583829489019977301. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017848292s
	
	
	==> describe nodes <==
	Name:               functional-998954
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-998954
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=functional-998954
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_04_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:04:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-998954
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:16:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:16:42 +0000   Fri, 17 Oct 2025 19:04:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:16:42 +0000   Fri, 17 Oct 2025 19:04:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:16:42 +0000   Fri, 17 Oct 2025 19:04:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:16:42 +0000   Fri, 17 Oct 2025 19:05:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-998954
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                5f1d2a4c-4ecc-40bc-b95b-52312c17b7bd
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-ldl88                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	  default                     hello-node-connect-7d85dfc575-8w4tw          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 coredns-66bc5c9577-wklt6                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-998954                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-dwqs8                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-998954             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-998954    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-74xrn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-998954             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-998954 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-998954 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-998954 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-998954 event: Registered Node functional-998954 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-998954 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-998954 event: Registered Node functional-998954 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-998954 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-998954 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-998954 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-998954 event: Registered Node functional-998954 in Controller
	
	
	==> dmesg <==
	[ +27.630815] overlayfs: idmapped layers are currently not supported
	[ +17.813448] overlayfs: idmapped layers are currently not supported
	[Oct17 18:24] overlayfs: idmapped layers are currently not supported
	[ +30.872028] overlayfs: idmapped layers are currently not supported
	[Oct17 18:25] overlayfs: idmapped layers are currently not supported
	[Oct17 18:27] overlayfs: idmapped layers are currently not supported
	[Oct17 18:29] overlayfs: idmapped layers are currently not supported
	[Oct17 18:30] overlayfs: idmapped layers are currently not supported
	[Oct17 18:31] overlayfs: idmapped layers are currently not supported
	[  +9.357480] overlayfs: idmapped layers are currently not supported
	[Oct17 18:33] overlayfs: idmapped layers are currently not supported
	[  +5.779853] overlayfs: idmapped layers are currently not supported
	[Oct17 18:34] overlayfs: idmapped layers are currently not supported
	[Oct17 18:35] overlayfs: idmapped layers are currently not supported
	[Oct17 18:36] overlayfs: idmapped layers are currently not supported
	[ +20.850590] overlayfs: idmapped layers are currently not supported
	[Oct17 18:38] overlayfs: idmapped layers are currently not supported
	[ +19.812679] overlayfs: idmapped layers are currently not supported
	[Oct17 18:39] overlayfs: idmapped layers are currently not supported
	[ +19.225178] overlayfs: idmapped layers are currently not supported
	[Oct17 18:40] overlayfs: idmapped layers are currently not supported
	[Oct17 18:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 18:57] overlayfs: idmapped layers are currently not supported
	[Oct17 19:03] overlayfs: idmapped layers are currently not supported
	[Oct17 19:04] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [635f71f8965ad4dd589b2e987975bba153bfbf788d15b05b453ca431b20de777] <==
	{"level":"warn","ts":"2025-10-17T19:05:34.436723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:05:34.466403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:05:34.495462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:05:34.522423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:05:34.548005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:05:34.566642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:05:34.681670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60718","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T19:05:58.883598Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-17T19:05:58.883847Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-998954","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-17T19:05:58.884176Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-17T19:05:59.021584Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-17T19:05:59.021767Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T19:05:59.021833Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-17T19:05:59.021838Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-17T19:05:59.021905Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-17T19:05:59.021935Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-17T19:05:59.021913Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-17T19:05:59.021979Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-17T19:05:59.021895Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-17T19:05:59.022055Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-17T19:05:59.022085Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T19:05:59.025640Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-17T19:05:59.025718Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T19:05:59.025764Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-17T19:05:59.025775Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-998954","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [cde11a4aaf1e3c0756035912e6de5aa483d167123933bcb8863eeadbd5f734dd] <==
	{"level":"warn","ts":"2025-10-17T19:06:18.569483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.588458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.605381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.622185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.645622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.652355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.678264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.707587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.718044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.732590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.752154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.769536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.787209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.805861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.824033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.849821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.896717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.921597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.950137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.979310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:18.993467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:06:19.076570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54218","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T19:16:17.522669Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1090}
	{"level":"info","ts":"2025-10-17T19:16:17.547300Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1090,"took":"24.089412ms","hash":3289346046,"current-db-size-bytes":3125248,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1335296,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-10-17T19:16:17.547357Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3289346046,"revision":1090,"compact-revision":-1}
	
	
	==> kernel <==
	 19:16:59 up  1:59,  0 user,  load average: 0.21, 0.41, 0.79
	Linux functional-998954 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1df28ee557c97a97d36f18b50b4e4155f3c631513c86893b570289bcda32b9bf] <==
	I1017 19:14:51.408905       1 main.go:301] handling current node
	I1017 19:15:01.408684       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:15:01.408722       1 main.go:301] handling current node
	I1017 19:15:11.404018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:15:11.404053       1 main.go:301] handling current node
	I1017 19:15:21.407342       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:15:21.407452       1 main.go:301] handling current node
	I1017 19:15:31.410921       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:15:31.410966       1 main.go:301] handling current node
	I1017 19:15:41.405490       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:15:41.405819       1 main.go:301] handling current node
	I1017 19:15:51.404222       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:15:51.404290       1 main.go:301] handling current node
	I1017 19:16:01.406861       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:16:01.406902       1 main.go:301] handling current node
	I1017 19:16:11.404554       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:16:11.404660       1 main.go:301] handling current node
	I1017 19:16:21.412455       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:16:21.412577       1 main.go:301] handling current node
	I1017 19:16:31.404758       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:16:31.404791       1 main.go:301] handling current node
	I1017 19:16:41.404686       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:16:41.404732       1 main.go:301] handling current node
	I1017 19:16:51.408627       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:16:51.408663       1 main.go:301] handling current node
	
	
	==> kindnet [e5bf263d382a1a163ab963536b5b0d50c8ae47b00a5fe2ffb0233c127a784388] <==
	I1017 19:05:30.902569       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:05:30.909490       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1017 19:05:30.909651       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:05:30.909665       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:05:30.909677       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:05:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:05:31.146019       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:05:31.152606       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:05:31.152715       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:05:31.156320       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 19:05:35.856804       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:05:35.856828       1 metrics.go:72] Registering metrics
	I1017 19:05:35.856877       1 controller.go:711] "Syncing nftables rules"
	I1017 19:05:41.117217       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:05:41.117277       1 main.go:301] handling current node
	I1017 19:05:51.117083       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:05:51.117122       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f159ce8307643f6a90f9aeb81af90926ab164cf3eeccc72df5287222c7f2d948] <==
	I1017 19:06:19.820017       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 19:06:19.820102       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 19:06:19.821040       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 19:06:19.830618       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 19:06:19.853399       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 19:06:19.853568       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 19:06:19.860894       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 19:06:19.861012       1 policy_source.go:240] refreshing policies
	I1017 19:06:19.864450       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:06:20.639962       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:06:20.743063       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:06:21.737709       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 19:06:22.021620       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 19:06:22.103388       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 19:06:22.110770       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 19:06:23.492609       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:06:23.543002       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:06:23.594177       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:06:36.125324       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.162.159"}
	I1017 19:06:48.596031       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.196.122"}
	I1017 19:06:57.269769       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.18.233"}
	E1017 19:07:07.956863       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1017 19:07:14.244697       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:44954: use of closed network connection
	I1017 19:07:14.438439       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.35.74"}
	I1017 19:16:19.777597       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [e6407232fcae0df0dbe8f2ae3beb39b72d4236d00b6d0b171301f209685eed52] <==
	I1017 19:06:23.217165       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 19:06:23.220852       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1017 19:06:23.225549       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 19:06:23.226177       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 19:06:23.226211       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 19:06:23.230223       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 19:06:23.232408       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 19:06:23.235843       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 19:06:23.235883       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 19:06:23.235925       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 19:06:23.236356       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 19:06:23.236971       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 19:06:23.240293       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 19:06:23.240419       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 19:06:23.240468       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 19:06:23.240822       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 19:06:23.244333       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 19:06:23.259651       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:06:23.260711       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:06:23.261780       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 19:06:23.266007       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 19:06:23.301407       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:06:23.304606       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:06:23.304624       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 19:06:23.304633       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [fc94c2af292e8519ea3b653fe782942ee97859ab9d84daa978616889c9dbc9a4] <==
	I1017 19:05:38.865079       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 19:05:38.865127       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 19:05:38.865692       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 19:05:38.866810       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 19:05:38.866933       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 19:05:38.867828       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 19:05:38.870292       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:05:38.872624       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1017 19:05:38.875535       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 19:05:38.876796       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:05:38.876848       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:05:38.876860       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 19:05:38.876867       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 19:05:38.877494       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 19:05:38.881278       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 19:05:38.882328       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 19:05:38.885515       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 19:05:38.905748       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 19:05:38.905831       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 19:05:38.905862       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 19:05:38.905878       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 19:05:38.905884       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 19:05:38.909405       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 19:05:38.910872       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 19:05:38.918263       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [1a6ec111b937e3f161029762a113cd7677c29f969b82bbd39178c15c91fa35ff] <==
	I1017 19:05:30.870592       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:05:32.215935       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:05:36.003458       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:05:36.003493       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 19:05:36.003571       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:05:36.297263       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:05:36.297328       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:05:36.344673       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:05:36.344996       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:05:36.345012       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:05:36.346424       1 config.go:200] "Starting service config controller"
	I1017 19:05:36.346434       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:05:36.346454       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:05:36.346458       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:05:36.346468       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:05:36.346472       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:05:36.347076       1 config.go:309] "Starting node config controller"
	I1017 19:05:36.347083       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:05:36.347090       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:05:36.449131       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:05:36.449161       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:05:36.449198       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [eff3e96e64e212df3ea0e9792e40f22d31c4c4a4f94dc3e527e418a046e42464] <==
	I1017 19:06:21.186443       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:06:21.274311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:06:21.374959       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:06:21.375008       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 19:06:21.375077       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:06:21.398760       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:06:21.398873       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:06:21.409253       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:06:21.409628       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:06:21.409848       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:06:21.411160       1 config.go:200] "Starting service config controller"
	I1017 19:06:21.411241       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:06:21.411286       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:06:21.411337       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:06:21.411375       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:06:21.411426       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:06:21.412197       1 config.go:309] "Starting node config controller"
	I1017 19:06:21.412263       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:06:21.412296       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:06:21.511417       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:06:21.511431       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:06:21.511473       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b5cd1b23acfdb80bed81471b9d03a039316a838e69dd1997c7cb2877ecd27cb0] <==
	I1017 19:06:18.295302       1 serving.go:386] Generated self-signed cert in-memory
	W1017 19:06:19.768954       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 19:06:19.769079       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1017 19:06:19.769118       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 19:06:19.769148       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 19:06:19.808696       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 19:06:19.808806       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:06:19.811180       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:06:19.811292       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:06:19.811614       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 19:06:19.811706       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 19:06:19.911976       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [ca632b64de347326fa6c0ae3e514ef7c67769609246133c842af12c520c96b76] <==
	I1017 19:05:33.162523       1 serving.go:386] Generated self-signed cert in-memory
	I1017 19:05:38.415041       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 19:05:38.415068       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:05:38.419893       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 19:05:38.420037       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 19:05:38.420115       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:05:38.420149       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:05:38.420190       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 19:05:38.420220       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 19:05:38.421402       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 19:05:38.421512       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 19:05:38.520792       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 19:05:38.520804       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1017 19:05:38.520832       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:05:58.873465       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1017 19:05:58.873487       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1017 19:05:58.873506       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1017 19:05:58.873549       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:05:58.873570       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1017 19:05:58.873587       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 19:05:58.873853       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1017 19:05:58.873879       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 17 19:14:22 functional-998954 kubelet[3824]: E1017 19:14:22.707319    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-8w4tw" podUID="9d5dbf59-f69f-440d-a57a-8843ec8ee49b"
	Oct 17 19:14:32 functional-998954 kubelet[3824]: E1017 19:14:32.707376    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ldl88" podUID="1a27ee3d-7314-4e1e-87e4-7dbaf35d08fa"
	Oct 17 19:14:36 functional-998954 kubelet[3824]: E1017 19:14:36.707340    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-8w4tw" podUID="9d5dbf59-f69f-440d-a57a-8843ec8ee49b"
	Oct 17 19:14:43 functional-998954 kubelet[3824]: E1017 19:14:43.709423    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ldl88" podUID="1a27ee3d-7314-4e1e-87e4-7dbaf35d08fa"
	Oct 17 19:14:48 functional-998954 kubelet[3824]: E1017 19:14:48.707934    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-8w4tw" podUID="9d5dbf59-f69f-440d-a57a-8843ec8ee49b"
	Oct 17 19:14:55 functional-998954 kubelet[3824]: E1017 19:14:55.709444    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ldl88" podUID="1a27ee3d-7314-4e1e-87e4-7dbaf35d08fa"
	Oct 17 19:15:01 functional-998954 kubelet[3824]: E1017 19:15:01.708055    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-8w4tw" podUID="9d5dbf59-f69f-440d-a57a-8843ec8ee49b"
	Oct 17 19:15:06 functional-998954 kubelet[3824]: E1017 19:15:06.707398    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ldl88" podUID="1a27ee3d-7314-4e1e-87e4-7dbaf35d08fa"
	Oct 17 19:15:13 functional-998954 kubelet[3824]: E1017 19:15:13.707564    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-8w4tw" podUID="9d5dbf59-f69f-440d-a57a-8843ec8ee49b"
	Oct 17 19:15:21 functional-998954 kubelet[3824]: E1017 19:15:21.707639    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ldl88" podUID="1a27ee3d-7314-4e1e-87e4-7dbaf35d08fa"
	Oct 17 19:15:26 functional-998954 kubelet[3824]: E1017 19:15:26.707732    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-8w4tw" podUID="9d5dbf59-f69f-440d-a57a-8843ec8ee49b"
	Oct 17 19:15:35 functional-998954 kubelet[3824]: E1017 19:15:35.708397    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ldl88" podUID="1a27ee3d-7314-4e1e-87e4-7dbaf35d08fa"
	Oct 17 19:15:40 functional-998954 kubelet[3824]: E1017 19:15:40.707779    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-8w4tw" podUID="9d5dbf59-f69f-440d-a57a-8843ec8ee49b"
	Oct 17 19:15:48 functional-998954 kubelet[3824]: E1017 19:15:48.707396    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ldl88" podUID="1a27ee3d-7314-4e1e-87e4-7dbaf35d08fa"
	Oct 17 19:15:51 functional-998954 kubelet[3824]: E1017 19:15:51.708026    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-8w4tw" podUID="9d5dbf59-f69f-440d-a57a-8843ec8ee49b"
	Oct 17 19:16:01 functional-998954 kubelet[3824]: E1017 19:16:01.707776    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ldl88" podUID="1a27ee3d-7314-4e1e-87e4-7dbaf35d08fa"
	Oct 17 19:16:05 functional-998954 kubelet[3824]: E1017 19:16:05.708240    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-8w4tw" podUID="9d5dbf59-f69f-440d-a57a-8843ec8ee49b"
	Oct 17 19:16:13 functional-998954 kubelet[3824]: E1017 19:16:13.707729    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ldl88" podUID="1a27ee3d-7314-4e1e-87e4-7dbaf35d08fa"
	Oct 17 19:16:19 functional-998954 kubelet[3824]: E1017 19:16:19.708092    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-8w4tw" podUID="9d5dbf59-f69f-440d-a57a-8843ec8ee49b"
	Oct 17 19:16:28 functional-998954 kubelet[3824]: E1017 19:16:28.707942    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ldl88" podUID="1a27ee3d-7314-4e1e-87e4-7dbaf35d08fa"
	Oct 17 19:16:30 functional-998954 kubelet[3824]: E1017 19:16:30.708027    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-8w4tw" podUID="9d5dbf59-f69f-440d-a57a-8843ec8ee49b"
	Oct 17 19:16:40 functional-998954 kubelet[3824]: E1017 19:16:40.707904    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ldl88" podUID="1a27ee3d-7314-4e1e-87e4-7dbaf35d08fa"
	Oct 17 19:16:42 functional-998954 kubelet[3824]: E1017 19:16:42.707443    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-8w4tw" podUID="9d5dbf59-f69f-440d-a57a-8843ec8ee49b"
	Oct 17 19:16:51 functional-998954 kubelet[3824]: E1017 19:16:51.708583    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ldl88" podUID="1a27ee3d-7314-4e1e-87e4-7dbaf35d08fa"
	Oct 17 19:16:57 functional-998954 kubelet[3824]: E1017 19:16:57.707823    3824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-8w4tw" podUID="9d5dbf59-f69f-440d-a57a-8843ec8ee49b"
	
	
	==> storage-provisioner [4ec1aa754745bca1210e329e907d27492dcdbb8cae0a473f5869aa1b33389ddc] <==
	W1017 19:16:35.189676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:37.192555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:37.198893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:39.201506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:39.205826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:41.208481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:41.212828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:43.215540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:43.219685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:45.224555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:45.230273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:47.233685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:47.240379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:49.243654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:49.248070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:51.250865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:51.255057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:53.257626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:53.262116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:55.265141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:55.271962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:57.276254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:57.282949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:59.285810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:16:59.292904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [60962638bee7a5ed21b2bbb1b72f0d35990e700faead5731457c5875369441ce] <==
	I1017 19:05:31.289153       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 19:05:35.848926       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 19:05:35.848980       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 19:05:35.982697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:05:39.439853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:05:43.700055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:05:47.297976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:05:50.351594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:05:53.374352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:05:53.381654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:05:53.381813       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 19:05:53.381995       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-998954_293d2247-6122-48b5-8427-9e481edcfbea!
	I1017 19:05:53.382805       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"925c302d-3757-4355-9417-08df19bbe73c", APIVersion:"v1", ResourceVersion:"529", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-998954_293d2247-6122-48b5-8427-9e481edcfbea became leader
	W1017 19:05:53.386164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:05:53.396371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 19:05:53.482211       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-998954_293d2247-6122-48b5-8427-9e481edcfbea!
	W1017 19:05:55.403837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:05:55.408958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:05:57.412137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 19:05:57.416327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-998954 -n functional-998954
helpers_test.go:269: (dbg) Run:  kubectl --context functional-998954 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-ldl88 hello-node-connect-7d85dfc575-8w4tw
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-998954 describe pod hello-node-75c85bcc94-ldl88 hello-node-connect-7d85dfc575-8w4tw
helpers_test.go:290: (dbg) kubectl --context functional-998954 describe pod hello-node-75c85bcc94-ldl88 hello-node-connect-7d85dfc575-8w4tw:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-ldl88
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-998954/192.168.49.2
	Start Time:       Fri, 17 Oct 2025 19:07:14 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tvfq6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tvfq6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m46s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-ldl88 to functional-998954
	  Normal   Pulling    6m52s (x5 over 9m46s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m52s (x5 over 9m46s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m52s (x5 over 9m46s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m42s (x20 over 9m46s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m30s (x21 over 9m46s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-8w4tw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-998954/192.168.49.2
	Start Time:       Fri, 17 Oct 2025 19:06:57 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-prrbl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-prrbl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-8w4tw to functional-998954
	  Normal   Pulling    7m7s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m7s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m7s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    5m (x21 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     5m (x21 over 10m)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.64s)
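The post-mortem above repeats one root cause for the image pulls: CRI-O resolves unqualified ("short") image names through containers-registries.conf, and with short-name-mode set to enforcing a name such as kicbase/echo-server:latest that matches more than one unqualified-search registry is rejected as ambiguous instead of being pulled. A minimal diagnostic sketch, assuming the standard /etc/containers/registries.conf location on the node; neither the path nor the crictl step is taken from this report:

	# Inspect the short-name policy and the search registries configured on the node
	out/minikube-linux-arm64 -p functional-998954 ssh -- grep -nE 'short-name-mode|unqualified-search-registries' /etc/containers/registries.conf
	# Reproduce the pull failure directly against the runtime; under enforcing mode an
	# ambiguous short name cannot be resolved non-interactively
	out/minikube-linux-arm64 -p functional-998954 ssh -- sudo crictl pull kicbase/echo-server:latest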

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 image load --daemon kicbase/echo-server:functional-998954 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-998954" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)
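This test and the two load-daemon variants below (ImageReloadDaemon, ImageTagAndLoadDaemon) fail the same assertion: after `image load --daemon`, `image ls` does not list kicbase/echo-server:functional-998954. A sketch for checking what the crio runtime actually stores after the load, including any registry prefix it adds; the crictl step is an assumption, not part of the test:

	out/minikube-linux-arm64 -p functional-998954 image load --daemon kicbase/echo-server:functional-998954 --alsologtostderr
	# List image names exactly as the runtime reports them
	out/minikube-linux-arm64 -p functional-998954 ssh -- sudo crictl images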

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 image load --daemon kicbase/echo-server:functional-998954 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-998954" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.16s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-998954
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 image load --daemon kicbase/echo-server:functional-998954 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-998954 image load --daemon kicbase/echo-server:functional-998954 --alsologtostderr: (1.071415012s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-998954" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 image save kicbase/echo-server:functional-998954 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1017 19:06:46.993023  283295 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:06:46.995939  283295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:06:46.995960  283295 out.go:374] Setting ErrFile to fd 2...
	I1017 19:06:46.995967  283295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:06:46.996253  283295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:06:46.997245  283295 config.go:182] Loaded profile config "functional-998954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:06:46.997379  283295 config.go:182] Loaded profile config "functional-998954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:06:46.997828  283295 cli_runner.go:164] Run: docker container inspect functional-998954 --format={{.State.Status}}
	I1017 19:06:47.026362  283295 ssh_runner.go:195] Run: systemctl --version
	I1017 19:06:47.026420  283295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-998954
	I1017 19:06:47.053394  283295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/functional-998954/id_rsa Username:docker}
	I1017 19:06:47.163505  283295 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1017 19:06:47.163577  283295 cache_images.go:254] Failed to load cached images for "functional-998954": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1017 19:06:47.163601  283295 cache_images.go:266] failed pushing to: functional-998954

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)
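The stderr above (cache_images.go:254) shows the load failing only because /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar was never written: this test consumes the tarball that TestFunctional/parallel/ImageCommands/ImageSaveToFile was expected to produce, so its failure is a downstream effect. A sketch of the two-step sequence run by hand, assuming a scratch path rather than the workspace path used by the suite:

	# 1. Save the tagged image out of the cluster to a tarball
	out/minikube-linux-arm64 -p functional-998954 image save kicbase/echo-server:functional-998954 /tmp/echo-server-save.tar --alsologtostderr
	# 2. Only if step 1 produced the file can the load step succeed
	ls -l /tmp/echo-server-save.tar && \
	  out/minikube-linux-arm64 -p functional-998954 image load /tmp/echo-server-save.tar --alsologtostderr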

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-998954
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 image save --daemon kicbase/echo-server:functional-998954 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-998954
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-998954: exit status 1 (20.770964ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-998954

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-998954

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)
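The verification step above looks for the exact name localhost/kicbase/echo-server:functional-998954 in the host Docker daemon after `image save --daemon`. A quick way to see what, if anything, the save actually delivered is to list every echo-server tag the daemon knows about; the filter pattern below is illustrative, not from this report:

	# List every repository:tag in the local Docker daemon that mentions echo-server
	docker images --format '{{.Repository}}:{{.Tag}}' | grep echo-server || echo "no echo-server images present"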

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-998954 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-998954 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-ldl88" [1a27ee3d-7314-4e1e-87e4-7dbaf35d08fa] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1017 19:07:19.988240  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:09:36.126143  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:10:03.830051  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:14:36.125162  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-998954 -n functional-998954
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-17 19:17:14.875311684 +0000 UTC m=+1244.044761742
functional_test.go:1460: (dbg) Run:  kubectl --context functional-998954 describe po hello-node-75c85bcc94-ldl88 -n default
functional_test.go:1460: (dbg) kubectl --context functional-998954 describe po hello-node-75c85bcc94-ldl88 -n default:
Name:             hello-node-75c85bcc94-ldl88
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-998954/192.168.49.2
Start Time:       Fri, 17 Oct 2025 19:07:14 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tvfq6 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-tvfq6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-ldl88 to functional-998954
Normal   Pulling    7m6s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m44s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-998954 logs hello-node-75c85bcc94-ldl88 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-998954 logs hello-node-75c85bcc94-ldl88 -n default: exit status 1 (100.061695ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-ldl88" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-998954 logs hello-node-75c85bcc94-ldl88 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.83s)
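The deployment here is created with the unqualified name kicbase/echo-server, so the pod hits the same short-name enforcement error recorded in its events. A hedged variant that sidesteps the ambiguity by fully qualifying the image, assuming Docker Hub is the intended registry for kicbase/echo-server:

	# Create the deployment with a fully qualified image reference
	kubectl --context functional-998954 create deployment hello-node --image docker.io/kicbase/echo-server:latest
	kubectl --context functional-998954 expose deployment hello-node --type=NodePort --port=8080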

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-998954 service --namespace=default --https --url hello-node: exit status 115 (472.396374ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31293
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-998954 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)
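Note: this HTTPS failure, and the Format and URL failures below, share a root cause with DeployApp above: minikube resolves the NodePort URL but exits with SVC_UNREACHABLE because no running pod backs the hello-node service. A minimal diagnostic sketch, reusing the context and names from the output above:

	# Confirm the service has no ready endpoints, and why the backing pod is not running
	kubectl --context functional-998954 get endpoints hello-node -n default
	kubectl --context functional-998954 get pods -l app=hello-node -n default -o wide

	# The NodePort minikube reported (31293) can be read back directly
	kubectl --context functional-998954 get svc hello-node -n default \
	  -o jsonpath='{.spec.ports[0].nodePort}'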

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-998954 service hello-node --url --format={{.IP}}: exit status 115 (462.66481ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-998954 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-998954 service hello-node --url: exit status 115 (483.574702ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31293
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-998954 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31293
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.48s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (514.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 stop --alsologtostderr -v 5
E1017 19:23:10.170388  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 stop --alsologtostderr -v 5: (37.714644182s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 start --wait true --alsologtostderr -v 5
E1017 19:24:32.092656  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:24:36.128672  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:26:48.232694  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:27:15.934736  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:29:36.125751  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-254035 start --wait true --alsologtostderr -v 5: exit status 105 (7m51.344096948s)

                                                
                                                
-- stdout --
	* [ha-254035] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-254035" primary control-plane node in "ha-254035" cluster
	* Pulling base image v0.0.48-1760609789-21757 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Enabled addons: 
	
	* Starting "ha-254035-m02" control-plane node in "ha-254035" cluster
	* Pulling base image v0.0.48-1760609789-21757 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:23:44.078300  306747 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:23:44.078421  306747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:23:44.078432  306747 out.go:374] Setting ErrFile to fd 2...
	I1017 19:23:44.078438  306747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:23:44.078707  306747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:23:44.079081  306747 out.go:368] Setting JSON to false
	I1017 19:23:44.079937  306747 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7575,"bootTime":1760721449,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 19:23:44.080008  306747 start.go:141] virtualization:  
	I1017 19:23:44.083220  306747 out.go:179] * [ha-254035] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 19:23:44.087049  306747 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:23:44.087156  306747 notify.go:220] Checking for updates...
	I1017 19:23:44.093223  306747 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:23:44.096040  306747 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:23:44.098900  306747 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 19:23:44.101720  306747 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 19:23:44.104684  306747 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:23:44.108337  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:44.108506  306747 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:23:44.135326  306747 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 19:23:44.135444  306747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:23:44.192131  306747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 19:23:44.183230595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:23:44.192236  306747 docker.go:318] overlay module found
	I1017 19:23:44.195310  306747 out.go:179] * Using the docker driver based on existing profile
	I1017 19:23:44.198085  306747 start.go:305] selected driver: docker
	I1017 19:23:44.198103  306747 start.go:925] validating driver "docker" against &{Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:23:44.198244  306747 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:23:44.198355  306747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:23:44.253333  306747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 19:23:44.243935529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:23:44.253792  306747 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:23:44.253819  306747 cni.go:84] Creating CNI manager for ""
	I1017 19:23:44.253877  306747 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 19:23:44.253928  306747 start.go:349] cluster config:
	{Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:23:44.258934  306747 out.go:179] * Starting "ha-254035" primary control-plane node in "ha-254035" cluster
	I1017 19:23:44.261731  306747 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:23:44.264643  306747 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:23:44.267316  306747 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:23:44.267375  306747 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 19:23:44.267392  306747 cache.go:58] Caching tarball of preloaded images
	I1017 19:23:44.267402  306747 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:23:44.267494  306747 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:23:44.267505  306747 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:23:44.267648  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:44.287307  306747 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:23:44.287328  306747 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:23:44.287345  306747 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:23:44.287367  306747 start.go:360] acquireMachinesLock for ha-254035: {Name:mka2e39989b9cf6078778e7f6519885462ea711f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:23:44.287430  306747 start.go:364] duration metric: took 44.061µs to acquireMachinesLock for "ha-254035"
	I1017 19:23:44.287455  306747 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:23:44.287461  306747 fix.go:54] fixHost starting: 
	I1017 19:23:44.287734  306747 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:23:44.304208  306747 fix.go:112] recreateIfNeeded on ha-254035: state=Stopped err=<nil>
	W1017 19:23:44.304236  306747 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:23:44.307544  306747 out.go:252] * Restarting existing docker container for "ha-254035" ...
	I1017 19:23:44.307642  306747 cli_runner.go:164] Run: docker start ha-254035
	I1017 19:23:44.557261  306747 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:23:44.582382  306747 kic.go:430] container "ha-254035" state is running.
	I1017 19:23:44.582813  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:23:44.609625  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:44.609882  306747 machine.go:93] provisionDockerMachine start ...
	I1017 19:23:44.609944  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:44.630467  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:44.634045  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 19:23:44.634070  306747 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:23:44.634815  306747 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:23:47.792030  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035
	
	I1017 19:23:47.792065  306747 ubuntu.go:182] provisioning hostname "ha-254035"
	I1017 19:23:47.792127  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:47.809622  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:47.809936  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 19:23:47.809952  306747 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035 && echo "ha-254035" | sudo tee /etc/hostname
	I1017 19:23:47.965159  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035
	
	I1017 19:23:47.965243  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:47.983936  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:47.984247  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 19:23:47.984262  306747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:23:48.140890  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:23:48.140965  306747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:23:48.140998  306747 ubuntu.go:190] setting up certificates
	I1017 19:23:48.141008  306747 provision.go:84] configureAuth start
	I1017 19:23:48.141069  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:23:48.158600  306747 provision.go:143] copyHostCerts
	I1017 19:23:48.158645  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:23:48.158680  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:23:48.158692  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:23:48.158773  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:23:48.158860  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:23:48.158883  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:23:48.158892  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:23:48.158921  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:23:48.158969  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:23:48.158990  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:23:48.158998  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:23:48.159024  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:23:48.159076  306747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035 san=[127.0.0.1 192.168.49.2 ha-254035 localhost minikube]
	I1017 19:23:49.196726  306747 provision.go:177] copyRemoteCerts
	I1017 19:23:49.196790  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:23:49.196831  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:49.213909  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:49.316345  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:23:49.316405  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:23:49.333689  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:23:49.333750  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1017 19:23:49.350869  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:23:49.350938  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 19:23:49.369234  306747 provision.go:87] duration metric: took 1.228212253s to configureAuth
	I1017 19:23:49.369303  306747 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:23:49.369552  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:49.369665  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:49.386704  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:49.387020  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 19:23:49.387042  306747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:23:49.707607  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:23:49.707692  306747 machine.go:96] duration metric: took 5.097783711s to provisionDockerMachine
	I1017 19:23:49.707720  306747 start.go:293] postStartSetup for "ha-254035" (driver="docker")
	I1017 19:23:49.707762  306747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:23:49.707871  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:23:49.707943  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:49.732798  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:49.836574  306747 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:23:49.839984  306747 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:23:49.840010  306747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:23:49.840021  306747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:23:49.840085  306747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:23:49.840181  306747 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:23:49.840196  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:23:49.840298  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:23:49.847846  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:23:49.865445  306747 start.go:296] duration metric: took 157.679358ms for postStartSetup
	I1017 19:23:49.865569  306747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:23:49.865624  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:49.889188  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:49.989662  306747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:23:49.994825  306747 fix.go:56] duration metric: took 5.707355296s for fixHost
	I1017 19:23:49.994852  306747 start.go:83] releasing machines lock for "ha-254035", held for 5.707408965s
	I1017 19:23:49.994927  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:23:50.015297  306747 ssh_runner.go:195] Run: cat /version.json
	I1017 19:23:50.015360  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:50.015301  306747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:23:50.015521  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:50.036378  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:50.050179  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:50.238257  306747 ssh_runner.go:195] Run: systemctl --version
	I1017 19:23:50.244735  306747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:23:50.281650  306747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:23:50.286151  306747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:23:50.286279  306747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:23:50.294085  306747 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:23:50.294116  306747 start.go:495] detecting cgroup driver to use...
	I1017 19:23:50.294156  306747 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:23:50.294238  306747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:23:50.309600  306747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:23:50.322860  306747 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:23:50.322932  306747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:23:50.338234  306747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:23:50.351355  306747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:23:50.467572  306747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:23:50.583217  306747 docker.go:234] disabling docker service ...
	I1017 19:23:50.583338  306747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:23:50.598924  306747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:23:50.611975  306747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:23:50.724286  306747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:23:50.847044  306747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:23:50.859364  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:23:50.873503  306747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:23:50.873573  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.882985  306747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:23:50.883056  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.892747  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.902591  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.911060  306747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:23:50.919007  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.928031  306747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.936934  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.945620  306747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:23:50.953208  306747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:23:50.960459  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:23:51.085184  306747 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:23:51.215570  306747 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:23:51.215643  306747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:23:51.219416  306747 start.go:563] Will wait 60s for crictl version
	I1017 19:23:51.219481  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:23:51.222932  306747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:23:51.247803  306747 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:23:51.247951  306747 ssh_runner.go:195] Run: crio --version
	I1017 19:23:51.276815  306747 ssh_runner.go:195] Run: crio --version
	I1017 19:23:51.309138  306747 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:23:51.311805  306747 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:23:51.327519  306747 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:23:51.331666  306747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:23:51.341689  306747 kubeadm.go:883] updating cluster {Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:23:51.341851  306747 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:23:51.341916  306747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:23:51.379317  306747 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:23:51.379341  306747 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:23:51.379396  306747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:23:51.405884  306747 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:23:51.405906  306747 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:23:51.405918  306747 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 19:23:51.406057  306747 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:23:51.406155  306747 ssh_runner.go:195] Run: crio config
	I1017 19:23:51.475467  306747 cni.go:84] Creating CNI manager for ""
	I1017 19:23:51.475497  306747 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 19:23:51.475520  306747 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:23:51.475544  306747 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-254035 NodeName:ha-254035 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:23:51.475670  306747 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-254035"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 19:23:51.475693  306747 kube-vip.go:115] generating kube-vip config ...
	I1017 19:23:51.475756  306747 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:23:51.487989  306747 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
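Note: the log record above shows kube-vip giving up on IPVS-based control-plane load-balancing because no ip_vs modules are visible on the shared host kernel. A minimal sketch of how one might check for and load them on the host before retrying the restart, assuming the host permits loading kernel modules (module names are the usual IPVS set, not taken from this log):

	# Check whether IPVS support is present on the host kernel
	lsmod | grep -w ip_vs || echo "ip_vs not loaded"

	# Load the IPVS modules, then re-run the cluster restart
	sudo modprobe ip_vs
	sudo modprobe ip_vs_rr
	sudo modprobe ip_vs_wrr
	sudo modprobe ip_vs_sh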
	I1017 19:23:51.488119  306747 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
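The static pod above runs kube-vip with ARP-based leader election, so the VIP 192.168.49.254 on port 8443 should follow whichever control-plane node currently holds the plndr-cp-lock lease. A small, hypothetical reachability probe for that VIP (not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// VIP and port taken from the kube-vip config above.
	addr := net.JoinHostPort("192.168.49.254", "8443")

	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP reachable at", addr)
}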
	I1017 19:23:51.488198  306747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:23:51.496044  306747 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:23:51.496117  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1017 19:23:51.503891  306747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1017 19:23:51.517028  306747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:23:51.530699  306747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1017 19:23:51.544563  306747 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 19:23:51.557994  306747 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:23:51.561600  306747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:23:51.571313  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:23:51.690597  306747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:23:51.707379  306747 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.2
	I1017 19:23:51.707451  306747 certs.go:195] generating shared ca certs ...
	I1017 19:23:51.707483  306747 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:51.707678  306747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:23:51.707765  306747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:23:51.707807  306747 certs.go:257] generating profile certs ...
	I1017 19:23:51.707925  306747 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:23:51.707978  306747 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea
	I1017 19:23:51.708011  306747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt.96820cea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1017 19:23:52.143690  306747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt.96820cea ...
	I1017 19:23:52.143724  306747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt.96820cea: {Name:mk84072e95c642d9de97a7b2d7684c1b2411f2c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:52.143929  306747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea ...
	I1017 19:23:52.143944  306747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea: {Name:mk1e13a21ca5f9f77c2e8e2d4f37d2c902696b37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:52.144031  306747 certs.go:382] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt.96820cea -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt
	I1017 19:23:52.144173  306747 certs.go:386] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key
	I1017 19:23:52.144307  306747 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:23:52.144326  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:23:52.144342  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:23:52.144362  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:23:52.144377  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:23:52.144396  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:23:52.144419  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:23:52.144435  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:23:52.144450  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:23:52.144501  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:23:52.144555  306747 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:23:52.144570  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:23:52.144594  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:23:52.144621  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:23:52.144646  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:23:52.144696  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:23:52.144726  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:23:52.144744  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:23:52.144760  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:23:52.145349  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:23:52.164836  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:23:52.182173  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:23:52.200320  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:23:52.220031  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:23:52.239993  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:23:52.259787  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:23:52.278396  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:23:52.296286  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:23:52.313979  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:23:52.331810  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:23:52.349798  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:23:52.364237  306747 ssh_runner.go:195] Run: openssl version
	I1017 19:23:52.376391  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:23:52.385410  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:23:52.389746  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:23:52.389837  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:23:52.434948  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:23:52.443397  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:23:52.452268  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:23:52.460529  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:23:52.460626  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:23:52.518909  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:23:52.528730  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:23:52.541129  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:23:52.545573  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:23:52.545658  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:23:52.629233  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:23:52.650967  306747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:23:52.657469  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:23:52.741430  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:23:52.801484  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:23:52.855613  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:23:52.911294  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:23:52.960715  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
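Each `openssl x509 -checkend 86400` invocation above exits non-zero if the certificate expires within the next 24 hours (86400 seconds). The same check expressed with Go's crypto/x509, as a sketch; the certificate path is only an example:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}

	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	// Equivalent of `openssl x509 -checkend 86400`: fail if the certificate
	// expires within the next 24 hours.
	deadline := time.Now().Add(24 * time.Hour)
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past", deadline.Format(time.RFC3339))
}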
	I1017 19:23:53.023389  306747 kubeadm.go:400] StartCluster: {Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:23:53.023526  306747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:23:53.023593  306747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:23:53.070982  306747 cri.go:89] found id: "a9f69dd8228df806b3caf0a6a77814b3035f6624474afd789ff17d36b93becbb"
	I1017 19:23:53.071006  306747 cri.go:89] found id: "2dc181e1d75c199e1d878c25f6b4eb381f5134e5e8ff6ed9deea02322d7cdf4c"
	I1017 19:23:53.071011  306747 cri.go:89] found id: "6fb4bcbcf5815899f9ed7e0ee3f40ae912c24131eda2482a13e66f3bf9211953"
	I1017 19:23:53.071015  306747 cri.go:89] found id: "99ffff8c4838d302fd86aa2def104fc0bc5a061a4b4b00a66b6659be26e84f94"
	I1017 19:23:53.071018  306747 cri.go:89] found id: "b745cb636fe8e12797dbad3808d1af04aa579d4fbd2ba8ac91052e88e1d9594d"
	I1017 19:23:53.071022  306747 cri.go:89] found id: ""
	I1017 19:23:53.071070  306747 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 19:23:53.085921  306747 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:23:53Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:23:53.085995  306747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:23:53.099392  306747 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 19:23:53.099418  306747 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 19:23:53.099471  306747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 19:23:53.118282  306747 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:23:53.118709  306747 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-254035" does not appear in /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:23:53.118820  306747 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-257739/kubeconfig needs updating (will repair): [kubeconfig missing "ha-254035" cluster setting kubeconfig missing "ha-254035" context setting]
	I1017 19:23:53.119084  306747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:53.119598  306747 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 19:23:53.120104  306747 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1017 19:23:53.120124  306747 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1017 19:23:53.120130  306747 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1017 19:23:53.120135  306747 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1017 19:23:53.120142  306747 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1017 19:23:53.120434  306747 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1017 19:23:53.120753  306747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 19:23:53.137306  306747 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1017 19:23:53.137333  306747 kubeadm.go:601] duration metric: took 37.90723ms to restartPrimaryControlPlane
	I1017 19:23:53.137344  306747 kubeadm.go:402] duration metric: took 113.964982ms to StartCluster
	I1017 19:23:53.137360  306747 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:53.137421  306747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:23:53.137983  306747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:53.138193  306747 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:23:53.138219  306747 start.go:241] waiting for startup goroutines ...
	I1017 19:23:53.138228  306747 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:23:53.138643  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:53.142436  306747 out.go:179] * Enabled addons: 
	I1017 19:23:53.145409  306747 addons.go:514] duration metric: took 7.175068ms for enable addons: enabled=[]
	I1017 19:23:53.145452  306747 start.go:246] waiting for cluster config update ...
	I1017 19:23:53.145461  306747 start.go:255] writing updated cluster config ...
	I1017 19:23:53.148803  306747 out.go:203] 
	I1017 19:23:53.151893  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:53.152042  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:53.155214  306747 out.go:179] * Starting "ha-254035-m02" control-plane node in "ha-254035" cluster
	I1017 19:23:53.158764  306747 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:23:53.161709  306747 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:23:53.164610  306747 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:23:53.164638  306747 cache.go:58] Caching tarball of preloaded images
	I1017 19:23:53.164743  306747 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:23:53.164758  306747 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:23:53.164894  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:53.165099  306747 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:23:53.194887  306747 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:23:53.194907  306747 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:23:53.194919  306747 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:23:53.194954  306747 start.go:360] acquireMachinesLock for ha-254035-m02: {Name:mkcf59557cfb2c18712510006a9b88f53e9d8916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:23:53.195003  306747 start.go:364] duration metric: took 34.034µs to acquireMachinesLock for "ha-254035-m02"
	I1017 19:23:53.195021  306747 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:23:53.195027  306747 fix.go:54] fixHost starting: m02
	I1017 19:23:53.195286  306747 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:23:53.230172  306747 fix.go:112] recreateIfNeeded on ha-254035-m02: state=Stopped err=<nil>
	W1017 19:23:53.230198  306747 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:23:53.233425  306747 out.go:252] * Restarting existing docker container for "ha-254035-m02" ...
	I1017 19:23:53.233506  306747 cli_runner.go:164] Run: docker start ha-254035-m02
	I1017 19:23:53.677194  306747 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:23:53.705353  306747 kic.go:430] container "ha-254035-m02" state is running.
	I1017 19:23:53.705741  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:23:53.741365  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:53.741612  306747 machine.go:93] provisionDockerMachine start ...
	I1017 19:23:53.741677  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:53.774362  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:53.774683  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1017 19:23:53.774700  306747 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:23:53.776617  306747 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32782->127.0.0.1:33179: read: connection reset by peer
	I1017 19:23:57.101345  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m02
	
	I1017 19:23:57.101367  306747 ubuntu.go:182] provisioning hostname "ha-254035-m02"
	I1017 19:23:57.101452  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:57.129925  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:57.130248  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1017 19:23:57.130260  306747 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035-m02 && echo "ha-254035-m02" | sudo tee /etc/hostname
	I1017 19:23:57.485252  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m02
	
	I1017 19:23:57.485332  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:57.518218  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:57.518523  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1017 19:23:57.518547  306747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:23:57.769807  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
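The SSH command above keeps the 127.0.1.1 entry idempotent: it rewrites /etc/hosts only when the node name is missing, reusing an existing 127.0.1.1 line if one is present and appending otherwise. A sketch of the same logic in Go (it prints the result instead of sudo-editing /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry returns hosts-file content that maps 127.0.1.1 to name,
// reusing an existing 127.0.1.1 line if present, appending otherwise.
func ensureHostEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) >= 2 && f[len(f)-1] == name {
			return hosts // an entry for this hostname already exists
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(ensureHostEntry(string(data), "ha-254035-m02"))
}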
	I1017 19:23:57.769837  306747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:23:57.769852  306747 ubuntu.go:190] setting up certificates
	I1017 19:23:57.769861  306747 provision.go:84] configureAuth start
	I1017 19:23:57.769925  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:23:57.808507  306747 provision.go:143] copyHostCerts
	I1017 19:23:57.808576  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:23:57.808611  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:23:57.808621  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:23:57.808702  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:23:57.808777  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:23:57.808795  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:23:57.808799  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:23:57.808824  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:23:57.808885  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:23:57.808900  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:23:57.808904  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:23:57.808927  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:23:57.808973  306747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035-m02 san=[127.0.0.1 192.168.49.3 ha-254035-m02 localhost minikube]
	I1017 19:23:58.970392  306747 provision.go:177] copyRemoteCerts
	I1017 19:23:58.970466  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:23:58.970517  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:58.988411  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:23:59.109264  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:23:59.109327  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:23:59.143927  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:23:59.144007  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:23:59.175735  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:23:59.175798  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 19:23:59.207513  306747 provision.go:87] duration metric: took 1.437637997s to configureAuth
	I1017 19:23:59.207541  306747 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:23:59.207787  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:59.207891  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:59.254211  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:59.254534  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1017 19:23:59.254554  306747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:23:59.802396  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:23:59.802506  306747 machine.go:96] duration metric: took 6.06086173s to provisionDockerMachine
	I1017 19:23:59.802537  306747 start.go:293] postStartSetup for "ha-254035-m02" (driver="docker")
	I1017 19:23:59.802584  306747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:23:59.802692  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:23:59.802768  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:59.826274  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:23:59.933472  306747 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:23:59.937860  306747 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:23:59.937890  306747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:23:59.937902  306747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:23:59.937957  306747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:23:59.938045  306747 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:23:59.938058  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:23:59.938173  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:23:59.946632  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:23:59.974586  306747 start.go:296] duration metric: took 172.005858ms for postStartSetup
	I1017 19:23:59.974693  306747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:23:59.974736  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:59.998482  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:24:00.178671  306747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:24:00.215855  306747 fix.go:56] duration metric: took 7.020817171s for fixHost
	I1017 19:24:00.215889  306747 start.go:83] releasing machines lock for "ha-254035-m02", held for 7.020877911s
	I1017 19:24:00.215976  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:24:00.366887  306747 out.go:179] * Found network options:
	I1017 19:24:00.370345  306747 out.go:179]   - NO_PROXY=192.168.49.2
	W1017 19:24:00.373400  306747 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:24:00.373520  306747 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 19:24:00.373638  306747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:24:00.373712  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:24:00.373921  306747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:24:00.373955  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:24:00.473797  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:24:00.502501  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:24:01.163570  306747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:24:01.201188  306747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:24:01.201285  306747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:24:01.221545  306747 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:24:01.221578  306747 start.go:495] detecting cgroup driver to use...
	I1017 19:24:01.221624  306747 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:24:01.221679  306747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:24:01.249432  306747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:24:01.274115  306747 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:24:01.274197  306747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:24:01.300156  306747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:24:01.327634  306747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:24:01.676293  306747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:24:01.963473  306747 docker.go:234] disabling docker service ...
	I1017 19:24:01.963548  306747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:24:01.985469  306747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:24:02.006761  306747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:24:02.326335  306747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:24:02.689696  306747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:24:02.707153  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:24:02.733380  306747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:24:02.733503  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.745270  306747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:24:02.745354  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.761212  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.777017  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.786654  306747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:24:02.797775  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.809053  306747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.819042  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.830450  306747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:24:02.839137  306747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:24:02.853061  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:24:03.081615  306747 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:25:33.444575  306747 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.36287356s)
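The sed calls a few lines up rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf so that pause_image points at registry.k8s.io/pause:3.10.1 and cgroup_manager matches the detected cgroupfs driver, after which crio is restarted (here taking about 90 seconds). A rough Go sketch of those two substitutions, for illustration only; the actual flow runs sed over SSH:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}

	conf := string(data)
	// Mirror: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Mirror: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Print the rewritten config instead of writing it back, to keep the sketch side-effect free.
	fmt.Print(conf)
}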
	I1017 19:25:33.444601  306747 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:25:33.444663  306747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:25:33.448790  306747 start.go:563] Will wait 60s for crictl version
	I1017 19:25:33.448855  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:25:33.452484  306747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:25:33.483181  306747 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:25:33.483261  306747 ssh_runner.go:195] Run: crio --version
	I1017 19:25:33.520275  306747 ssh_runner.go:195] Run: crio --version
	I1017 19:25:33.555708  306747 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:25:33.558710  306747 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 19:25:33.561569  306747 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:25:33.577269  306747 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:25:33.581166  306747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:25:33.590512  306747 mustload.go:65] Loading cluster: ha-254035
	I1017 19:25:33.590749  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:25:33.591003  306747 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:25:33.607631  306747 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:25:33.607910  306747 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.3
	I1017 19:25:33.607918  306747 certs.go:195] generating shared ca certs ...
	I1017 19:25:33.607932  306747 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:25:33.608031  306747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:25:33.608069  306747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:25:33.608076  306747 certs.go:257] generating profile certs ...
	I1017 19:25:33.608151  306747 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:25:33.608210  306747 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.5a836dc6
	I1017 19:25:33.608248  306747 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:25:33.608256  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:25:33.608268  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:25:33.608278  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:25:33.608288  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:25:33.608298  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:25:33.608314  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:25:33.608325  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:25:33.608334  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:25:33.608382  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:25:33.608409  306747 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:25:33.608418  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:25:33.608439  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:25:33.608460  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:25:33.608482  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:25:33.608557  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:25:33.608586  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:25:33.608606  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:25:33.608635  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:25:33.608691  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:25:33.626221  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:25:33.720799  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 19:25:33.724641  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 19:25:33.732808  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 19:25:33.736200  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 19:25:33.744126  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 19:25:33.747465  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 19:25:33.755494  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 19:25:33.759075  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1017 19:25:33.767011  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 19:25:33.770516  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 19:25:33.778582  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 19:25:33.781925  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 19:25:33.789662  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:25:33.814144  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:25:33.834289  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:25:33.855264  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:25:33.875243  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:25:33.892238  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:25:33.909902  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:25:33.927819  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:25:33.945089  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:25:33.970864  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:25:33.990984  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:25:34.011449  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 19:25:34.027436  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 19:25:34.042890  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 19:25:34.058368  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1017 19:25:34.072057  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 19:25:34.088147  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 19:25:34.104554  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1017 19:25:34.119006  306747 ssh_runner.go:195] Run: openssl version
	I1017 19:25:34.125500  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:25:34.134066  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:25:34.138184  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:25:34.138272  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:25:34.179366  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:25:34.187225  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:25:34.195194  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:25:34.198812  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:25:34.198884  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:25:34.240748  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:25:34.248576  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:25:34.256442  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:25:34.260252  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:25:34.260343  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:25:34.301741  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:25:34.309494  306747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:25:34.313266  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:25:34.354021  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:25:34.403496  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:25:34.452995  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:25:34.501920  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:25:34.553096  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
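	Note: the repeated "openssl x509 -noout -in <cert> -checkend 86400" calls above check whether each control-plane certificate expires within the next 24 hours (86400 seconds). The following is a minimal Go sketch of the same check, assuming the certificate path from the log exists on the machine running it; it is illustrative only, not minikube's implementation.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin mimics `openssl x509 -checkend`: it reports whether the
	// certificate at path expires within the given window.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}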
	I1017 19:25:34.605637  306747 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1017 19:25:34.605735  306747 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:25:34.605768  306747 kube-vip.go:115] generating kube-vip config ...
	I1017 19:25:34.605818  306747 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:25:34.618260  306747 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:25:34.618384  306747 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
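	Note: the "lsmod | grep ip_vs" probe above checks whether the ip_vs kernel modules are loaded. Because the command exited with status 1, minikube reports that it is giving up control-plane load balancing and writes the ARP-mode kube-vip static pod manifest shown above instead. A rough stand-alone sketch of that decision follows; it is illustrative only, since the real check runs on the node over SSH via ssh_runner.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// checkIPVS runs the same probe seen in the log: grep exits non-zero when
	// no ip_vs module line is found, so an error means the modules are absent.
	func checkIPVS() bool {
		out, err := exec.Command("sh", "-c", "lsmod | grep ip_vs").Output()
		return err == nil && len(out) > 0
	}

	func main() {
		if checkIPVS() {
			fmt.Println("ip_vs modules present: control-plane load balancing could be enabled")
		} else {
			fmt.Println("ip_vs modules absent: generating ARP-mode kube-vip config instead")
		}
	}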
	I1017 19:25:34.618473  306747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:25:34.626096  306747 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:25:34.626222  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 19:25:34.634241  306747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:25:34.648042  306747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:25:34.661462  306747 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 19:25:34.676617  306747 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:25:34.680227  306747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:25:34.690889  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:25:34.816737  306747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:25:34.831088  306747 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:25:34.831560  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:25:34.834934  306747 out.go:179] * Verifying Kubernetes components...
	I1017 19:25:34.837819  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:25:34.968993  306747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:25:34.983274  306747 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 19:25:34.983348  306747 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
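	Note: the kapi.go line above dumps the rest.Config built from the profile's client certificate, and the warning that follows shows the stale VIP host (192.168.49.254) being overridden with a directly reachable control-plane address (192.168.49.2). Below is a hedged client-go sketch of an equivalent config and override, reusing the certificate paths from the log; it is not minikube's own code.

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Client config pointed at the HA virtual IP, authenticated with the
		// profile's client certificate (paths copied from the log above).
		cfg := &rest.Config{
			Host: "https://192.168.49.254:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key",
				CAFile:   "/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt",
			},
		}

		// "Overriding stale ClientConfig host": swap the VIP for a reachable
		// control-plane endpoint before building the clientset.
		cfg.Host = "https://192.168.49.2:8443"

		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Println("building clientset failed:", err)
			return
		}
		_ = clientset // ready for node/pod readiness checks
	}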
	I1017 19:25:34.983632  306747 node_ready.go:35] waiting up to 6m0s for node "ha-254035-m02" to be "Ready" ...
	I1017 19:25:40.996755  306747 node_ready.go:49] node "ha-254035-m02" is "Ready"
	I1017 19:25:40.996789  306747 node_ready.go:38] duration metric: took 6.013138239s for node "ha-254035-m02" to be "Ready" ...
	I1017 19:25:40.996811  306747 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:25:40.996889  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:41.497684  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:41.997836  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:42.497138  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:42.997736  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:43.497602  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:43.997356  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:44.497754  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:44.997290  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:45.497281  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:45.997333  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:46.497704  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:46.997128  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:47.497723  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:47.997671  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:48.497561  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:48.997733  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:49.497782  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:49.997750  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:50.497774  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:50.997177  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:51.497562  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:51.997821  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:52.497764  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:52.997863  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:53.497099  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:53.997052  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:54.497663  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:54.997664  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:55.497701  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:55.997019  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:56.497726  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:56.997168  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:57.497752  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:57.997835  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:58.497010  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:58.997743  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:59.497316  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:59.997012  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:00.497061  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:00.997884  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:01.497722  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:01.997039  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:02.497739  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:02.997315  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:03.497590  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:03.997754  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:04.497035  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:04.997744  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:05.497624  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:05.997419  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:06.497061  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:06.997596  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:07.497373  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:07.997733  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:08.497364  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:08.997732  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:09.497421  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:09.997728  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:10.497717  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:10.996987  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:11.497090  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:11.996943  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:12.497429  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:12.997010  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:13.496953  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:13.997093  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:14.497074  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:14.997281  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:15.497737  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:15.997688  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:16.497625  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:16.997704  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:17.497320  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:17.996949  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:18.497953  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:18.997042  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:19.497090  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:19.997041  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:20.497518  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:20.997019  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:21.497012  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:21.996982  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:22.497045  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:22.997657  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:23.497467  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:23.997803  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:24.497044  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:24.997325  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:25.497747  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:25.997044  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:26.497026  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:26.997552  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:27.497036  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:27.997604  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:28.497701  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:28.997373  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:29.497563  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:29.997697  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:30.497017  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:30.997407  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:31.497716  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:31.997874  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:32.497096  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:32.997561  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:33.497057  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:33.997665  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:34.497043  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
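	Note: the block above is minikube polling roughly every half second for a kube-apiserver process with pgrep until the process appears or the wait expires, after which it switches to collecting logs. A simplified local sketch of that polling loop follows; the real commands run over SSH, and sudo plus pgrep are assumed to be available here.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer retries the pgrep check on a fixed interval until a
	// matching process exists (pgrep exits 0) or the timeout elapses.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}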
	I1017 19:26:34.997691  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:34.997800  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:35.032363  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:35.032386  306747 cri.go:89] found id: ""
	I1017 19:26:35.032399  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:35.032460  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.036381  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:35.036459  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:35.065338  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:35.065359  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:35.065364  306747 cri.go:89] found id: ""
	I1017 19:26:35.065371  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:35.065425  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.069065  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.072703  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:35.072774  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:35.103898  306747 cri.go:89] found id: ""
	I1017 19:26:35.103925  306747 logs.go:282] 0 containers: []
	W1017 19:26:35.103934  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:35.103941  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:35.104009  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:35.133147  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:35.133171  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:35.133176  306747 cri.go:89] found id: ""
	I1017 19:26:35.133189  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:35.133243  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.137074  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.140598  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:35.140672  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:35.172805  306747 cri.go:89] found id: ""
	I1017 19:26:35.172831  306747 logs.go:282] 0 containers: []
	W1017 19:26:35.172840  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:35.172847  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:35.172921  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:35.200314  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:35.200339  306747 cri.go:89] found id: ""
	I1017 19:26:35.200347  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:35.200399  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.204068  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:35.204142  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:35.229333  306747 cri.go:89] found id: ""
	I1017 19:26:35.229355  306747 logs.go:282] 0 containers: []
	W1017 19:26:35.229364  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:35.229373  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:35.229386  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:35.270788  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:35.270824  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:35.327408  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:35.327441  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:35.407924  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:35.407963  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:35.511553  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:35.511590  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:35.532712  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:35.532742  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:35.560601  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:35.560631  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:35.605951  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:35.605984  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:35.637220  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:35.637251  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:35.667818  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:35.667848  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:35.697952  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:35.697980  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:36.107033  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:36.098521    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.099526    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.100351    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.101907    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.102306    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:36.098521    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.099526    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.100351    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.101907    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.102306    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:38.608691  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:38.620441  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:38.620597  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:38.653949  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:38.653982  306747 cri.go:89] found id: ""
	I1017 19:26:38.653991  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:38.654045  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.657661  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:38.657779  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:38.682961  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:38.682992  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:38.682998  306747 cri.go:89] found id: ""
	I1017 19:26:38.683005  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:38.683057  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.686897  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.690246  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:38.690316  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:38.727058  306747 cri.go:89] found id: ""
	I1017 19:26:38.727088  306747 logs.go:282] 0 containers: []
	W1017 19:26:38.727096  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:38.727102  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:38.727159  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:38.751866  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:38.751891  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:38.751895  306747 cri.go:89] found id: ""
	I1017 19:26:38.751902  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:38.751960  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.755561  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.758764  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:38.758835  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:38.791573  306747 cri.go:89] found id: ""
	I1017 19:26:38.791597  306747 logs.go:282] 0 containers: []
	W1017 19:26:38.791607  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:38.791613  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:38.791672  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:38.818970  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:38.818993  306747 cri.go:89] found id: ""
	I1017 19:26:38.819002  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:38.819054  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.822644  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:38.822766  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:38.849350  306747 cri.go:89] found id: ""
	I1017 19:26:38.849373  306747 logs.go:282] 0 containers: []
	W1017 19:26:38.849381  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:38.849390  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:38.849436  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:38.883482  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:38.883512  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:38.978629  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:38.978664  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:39.055121  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:39.045881    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.046283    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.047962    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.048507    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.050096    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:39.045881    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.046283    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.047962    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.048507    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.050096    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:39.055145  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:39.055158  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:39.081488  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:39.081516  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:39.123529  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:39.123560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:39.152993  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:39.153024  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:39.181581  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:39.181608  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:39.199086  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:39.199116  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:39.231605  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:39.231638  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:39.287509  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:39.287544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:41.868969  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:41.879522  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:41.879591  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:41.906366  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:41.906388  306747 cri.go:89] found id: ""
	I1017 19:26:41.906397  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:41.906450  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:41.909979  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:41.910090  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:41.940072  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:41.940101  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:41.940105  306747 cri.go:89] found id: ""
	I1017 19:26:41.940113  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:41.940173  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:41.945194  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:41.948667  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:41.948784  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:41.979374  306747 cri.go:89] found id: ""
	I1017 19:26:41.979410  306747 logs.go:282] 0 containers: []
	W1017 19:26:41.979419  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:41.979425  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:41.979492  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:42.008367  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:42.008445  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:42.008465  306747 cri.go:89] found id: ""
	I1017 19:26:42.008493  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:42.008628  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:42.016467  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:42.031735  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:42.031876  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:42.079629  306747 cri.go:89] found id: ""
	I1017 19:26:42.079665  306747 logs.go:282] 0 containers: []
	W1017 19:26:42.079676  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:42.079684  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:42.079750  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:42.122316  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:42.122342  306747 cri.go:89] found id: ""
	I1017 19:26:42.122351  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:42.122423  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:42.131137  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:42.131241  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:42.200222  306747 cri.go:89] found id: ""
	I1017 19:26:42.200249  306747 logs.go:282] 0 containers: []
	W1017 19:26:42.200259  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:42.200270  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:42.200283  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:42.314817  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:42.314908  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:42.375712  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:42.375762  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:42.431602  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:42.431639  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:42.465004  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:42.465097  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:42.491256  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:42.491284  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:42.567094  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:42.558455    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.559104    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.560757    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.561472    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.563142    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:42.558455    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.559104    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.560757    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.561472    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.563142    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:42.567120  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:42.567134  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:42.597513  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:42.597543  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:42.632231  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:42.632268  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:42.659445  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:42.659478  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:42.686189  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:42.686217  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:45.285116  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:45.308457  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:45.308578  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:45.374050  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:45.374075  306747 cri.go:89] found id: ""
	I1017 19:26:45.374083  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:45.374152  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.386847  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:45.387031  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:45.432081  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:45.432105  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:45.432111  306747 cri.go:89] found id: ""
	I1017 19:26:45.432129  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:45.432185  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.436568  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.443473  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:45.443575  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:45.473992  306747 cri.go:89] found id: ""
	I1017 19:26:45.474066  306747 logs.go:282] 0 containers: []
	W1017 19:26:45.474095  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:45.474124  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:45.474279  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:45.508735  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:45.508808  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:45.508820  306747 cri.go:89] found id: ""
	I1017 19:26:45.508829  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:45.508889  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.513024  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.517047  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:45.517124  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:45.544672  306747 cri.go:89] found id: ""
	I1017 19:26:45.544698  306747 logs.go:282] 0 containers: []
	W1017 19:26:45.544707  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:45.544714  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:45.544814  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:45.577228  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:45.577250  306747 cri.go:89] found id: ""
	I1017 19:26:45.577257  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:45.577316  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.581280  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:45.581379  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:45.608143  306747 cri.go:89] found id: ""
	I1017 19:26:45.608166  306747 logs.go:282] 0 containers: []
	W1017 19:26:45.608174  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:45.608183  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:45.608226  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:45.627200  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:45.627230  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:45.699692  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:45.692149    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.692814    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.694339    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.694730    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.696164    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:45.692149    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.692814    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.694339    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.694730    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.696164    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:45.699717  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:45.699732  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:45.725239  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:45.725269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:45.766316  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:45.766359  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:45.831866  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:45.831908  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:45.869708  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:45.869736  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:45.910170  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:45.910198  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:46.010455  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:46.010498  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:46.047523  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:46.047559  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:46.076222  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:46.076306  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
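The cycle above (and each of the near-identical cycles that follow) is minikube probing the restarted control plane: it looks for a kube-apiserver process with pgrep, enumerates containers for each component with crictl, and then gathers logs. A minimal Go sketch of that probe step, run by hand on the node (for example inside `minikube ssh`), is shown below; it only shells out to the same pgrep and crictl invocations visible in the log and is illustrative, not minikube's own implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a command and returns its trimmed stdout; a non-zero exit
// (for example pgrep finding no match) comes back as an error.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Mirrors the logged check: sudo pgrep -xnf kube-apiserver.*minikube.*
	if _, err := run("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*"); err != nil {
		fmt.Println("no kube-apiserver process found:", err)
	}

	// Mirrors the logged listing: sudo crictl ps -a --quiet --name=<component>
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for _, c := range components {
		out, err := run("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(out)
		fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
	}
}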
	I1017 19:26:48.663425  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:48.673865  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:48.673931  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:48.699244  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:48.699267  306747 cri.go:89] found id: ""
	I1017 19:26:48.699275  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:48.699330  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.702918  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:48.702988  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:48.729193  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:48.729268  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:48.729288  306747 cri.go:89] found id: ""
	I1017 19:26:48.729311  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:48.729390  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.732927  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.736821  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:48.736893  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:48.763745  306747 cri.go:89] found id: ""
	I1017 19:26:48.763770  306747 logs.go:282] 0 containers: []
	W1017 19:26:48.763780  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:48.763786  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:48.763842  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:48.790384  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:48.790407  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:48.790413  306747 cri.go:89] found id: ""
	I1017 19:26:48.790420  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:48.790496  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.796703  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.800342  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:48.800409  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:48.825802  306747 cri.go:89] found id: ""
	I1017 19:26:48.825830  306747 logs.go:282] 0 containers: []
	W1017 19:26:48.825839  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:48.825846  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:48.825904  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:48.863208  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:48.863231  306747 cri.go:89] found id: ""
	I1017 19:26:48.863239  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:48.863294  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.866822  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:48.866902  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:48.896937  306747 cri.go:89] found id: ""
	I1017 19:26:48.897017  306747 logs.go:282] 0 containers: []
	W1017 19:26:48.897039  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:48.897080  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:48.897109  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:48.999995  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:49.000071  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:49.019541  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:49.019629  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:49.045737  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:49.045806  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:49.106443  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:49.106478  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:49.135555  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:49.135583  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:49.162643  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:49.162670  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:49.240999  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:49.241038  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:49.311820  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:49.304505    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.305101    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.306817    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.307292    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.308350    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:49.304505    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.305101    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.306817    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.307292    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.308350    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:49.311849  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:49.311861  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:49.347575  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:49.347614  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:49.399291  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:49.399328  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:51.931612  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:51.944600  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:51.944667  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:51.977717  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:51.977741  306747 cri.go:89] found id: ""
	I1017 19:26:51.977750  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:51.977808  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:51.981757  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:51.981877  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:52.013943  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:52.013965  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:52.013971  306747 cri.go:89] found id: ""
	I1017 19:26:52.013979  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:52.014034  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.017876  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.021450  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:52.021529  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:52.054762  306747 cri.go:89] found id: ""
	I1017 19:26:52.054788  306747 logs.go:282] 0 containers: []
	W1017 19:26:52.054797  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:52.054804  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:52.054873  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:52.094469  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:52.094492  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:52.094498  306747 cri.go:89] found id: ""
	I1017 19:26:52.094506  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:52.094561  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.099707  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.103487  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:52.103557  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:52.137366  306747 cri.go:89] found id: ""
	I1017 19:26:52.137393  306747 logs.go:282] 0 containers: []
	W1017 19:26:52.137403  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:52.137410  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:52.137494  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:52.164118  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:52.164142  306747 cri.go:89] found id: ""
	I1017 19:26:52.164151  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:52.164235  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.167871  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:52.167951  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:52.195587  306747 cri.go:89] found id: ""
	I1017 19:26:52.195667  306747 logs.go:282] 0 containers: []
	W1017 19:26:52.195691  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:52.195730  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:52.195759  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:52.214865  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:52.214895  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:52.252677  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:52.252718  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:52.306241  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:52.306281  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:52.362956  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:52.362991  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:52.391628  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:52.391659  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:52.471864  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:52.463115    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.464242    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.464958    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.465978    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.466515    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:52.463115    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.464242    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.464958    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.465978    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.466515    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:52.471900  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:52.471915  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:52.518448  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:52.518483  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:52.552877  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:52.552904  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:52.635208  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:52.635241  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:52.671244  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:52.671274  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:55.270940  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:55.282002  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:55.282081  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:55.307829  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:55.307853  306747 cri.go:89] found id: ""
	I1017 19:26:55.307862  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:55.307917  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.311717  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:55.311788  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:55.337747  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:55.337770  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:55.337775  306747 cri.go:89] found id: ""
	I1017 19:26:55.337783  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:55.337840  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.341583  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.345443  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:55.345519  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:55.374240  306747 cri.go:89] found id: ""
	I1017 19:26:55.374268  306747 logs.go:282] 0 containers: []
	W1017 19:26:55.374277  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:55.374283  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:55.374348  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:55.400969  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:55.400994  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:55.400999  306747 cri.go:89] found id: ""
	I1017 19:26:55.401007  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:55.401074  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.405683  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.409216  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:55.409288  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:55.436866  306747 cri.go:89] found id: ""
	I1017 19:26:55.436897  306747 logs.go:282] 0 containers: []
	W1017 19:26:55.436907  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:55.436913  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:55.436972  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:55.469071  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:55.469094  306747 cri.go:89] found id: ""
	I1017 19:26:55.469103  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:55.469160  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.472979  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:55.473075  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:55.504006  306747 cri.go:89] found id: ""
	I1017 19:26:55.504033  306747 logs.go:282] 0 containers: []
	W1017 19:26:55.504043  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:55.504052  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:55.504064  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:55.530026  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:55.530065  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:55.566251  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:55.566281  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:55.619544  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:55.619580  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:55.647120  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:55.647155  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:55.674483  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:55.674552  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:55.771290  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:55.771328  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:55.791108  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:55.791139  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:55.877444  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:55.868298    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.869608    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.870496    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.871568    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.873502    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:55.868298    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.869608    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.870496    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.871568    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.873502    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:55.877467  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:55.877481  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:55.942292  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:55.942327  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:56.029233  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:56.029279  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
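Every "describe nodes" attempt above fails the same way: the kubectl client cannot even open a TCP connection to https://localhost:8443, so the apiserver container exists but nothing is accepting connections on the secure port yet. As an illustrative aid (not part of the test), a small Go probe like the one below separates "port closed" from "listening but unhealthy"; the host and port come straight from the logged error, while the use of /healthz and the skipped certificate verification are assumptions made for the sketch.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

func main() {
	// Step 1: can we open a TCP connection at all? The "connection refused"
	// failures in the log would fail right here.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()

	// Step 2: the port is open; ask the health endpoint how the server feels.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %s (%s)\n", resp.Status, body)
}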
	I1017 19:26:58.564639  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:58.575251  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:58.575327  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:58.603745  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:58.603769  306747 cri.go:89] found id: ""
	I1017 19:26:58.603778  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:58.603841  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.607600  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:58.607673  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:58.635364  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:58.635387  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:58.635393  306747 cri.go:89] found id: ""
	I1017 19:26:58.635401  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:58.635459  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.639164  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.642599  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:58.642665  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:58.671065  306747 cri.go:89] found id: ""
	I1017 19:26:58.671089  306747 logs.go:282] 0 containers: []
	W1017 19:26:58.671098  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:58.671105  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:58.671161  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:58.697581  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:58.697606  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:58.697613  306747 cri.go:89] found id: ""
	I1017 19:26:58.697621  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:58.697701  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.701636  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.705721  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:58.705790  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:58.739521  306747 cri.go:89] found id: ""
	I1017 19:26:58.739548  306747 logs.go:282] 0 containers: []
	W1017 19:26:58.739557  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:58.739563  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:58.739618  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:58.766994  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:58.767022  306747 cri.go:89] found id: ""
	I1017 19:26:58.767030  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:58.767085  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.771181  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:58.771253  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:58.798835  306747 cri.go:89] found id: ""
	I1017 19:26:58.798862  306747 logs.go:282] 0 containers: []
	W1017 19:26:58.798871  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:58.798880  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:58.798891  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:58.841984  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:58.842010  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:58.866669  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:58.866697  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:58.916756  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:58.916789  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:58.980015  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:58.980050  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:59.009380  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:59.009409  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:59.109257  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:59.109295  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:59.177549  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:59.168803    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.169600    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.171537    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.172076    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.173678    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:59.168803    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.169600    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.171537    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.172076    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.173678    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:59.177581  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:59.177599  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:59.206699  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:59.206727  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:59.242107  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:59.242142  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:59.275450  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:59.275479  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:01.857354  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:01.869639  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:01.869705  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:01.902744  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:01.902764  306747 cri.go:89] found id: ""
	I1017 19:27:01.902772  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:01.902838  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:01.906810  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:01.906935  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:01.934659  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:01.934722  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:01.934742  306747 cri.go:89] found id: ""
	I1017 19:27:01.934766  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:01.934853  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:01.938762  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:01.946146  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:01.946267  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:01.980395  306747 cri.go:89] found id: ""
	I1017 19:27:01.980461  306747 logs.go:282] 0 containers: []
	W1017 19:27:01.980482  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:01.980505  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:01.980614  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:02.015273  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:02.015298  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:02.015303  306747 cri.go:89] found id: ""
	I1017 19:27:02.015320  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:02.015383  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:02.019407  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:02.023456  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:02.023534  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:02.051152  306747 cri.go:89] found id: ""
	I1017 19:27:02.051182  306747 logs.go:282] 0 containers: []
	W1017 19:27:02.051192  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:02.051198  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:02.051258  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:02.080723  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:02.080745  306747 cri.go:89] found id: ""
	I1017 19:27:02.080753  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:02.080813  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:02.084603  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:02.084678  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:02.120072  306747 cri.go:89] found id: ""
	I1017 19:27:02.120146  306747 logs.go:282] 0 containers: []
	W1017 19:27:02.120170  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:02.120195  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:02.120230  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:02.139600  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:02.139631  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:02.185131  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:02.185166  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:02.229909  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:02.229940  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:02.260111  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:02.260140  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:02.288588  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:02.288618  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:02.370459  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:02.370495  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:02.476572  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:02.476608  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:02.551905  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:02.543576    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.544579    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.546057    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.546535    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.548140    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:02.543576    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.544579    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.546057    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.546535    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.548140    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:02.551926  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:02.551940  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:02.578293  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:02.578321  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:02.633456  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:02.633493  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
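The timestamps show the same diagnostic cycle repeating roughly every three seconds while the run waits for the apiserver to come back. A generic poll-until-ready loop of that shape is sketched below; the three-second interval and two-minute deadline are illustrative assumptions rather than values taken from minikube, and the readiness check reuses the plain TCP dial from the previous sketch.

package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// apiserverReachable reports whether something accepts TCP connections on the
// apiserver port; the failures logged above would keep returning false here.
func apiserverReachable() bool {
	conn, err := net.DialTimeout("tcp", "localhost:8443", time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

// pollUntil retries check every interval until it returns true or timeout passes.
func pollUntil(interval, timeout time.Duration, check func() bool) error {
	deadline := time.Now().Add(timeout)
	for {
		if check() {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := pollUntil(3*time.Second, 2*time.Minute, apiserverReachable); err != nil {
		fmt.Println("apiserver never became reachable:", err)
		return
	}
	fmt.Println("apiserver is reachable")
}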
	I1017 19:27:05.164689  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:05.177240  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:05.177315  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:05.205506  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:05.205530  306747 cri.go:89] found id: ""
	I1017 19:27:05.205540  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:05.205597  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.209410  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:05.209492  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:05.236360  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:05.236383  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:05.236388  306747 cri.go:89] found id: ""
	I1017 19:27:05.236396  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:05.236448  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.240255  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.243840  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:05.243907  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:05.279749  306747 cri.go:89] found id: ""
	I1017 19:27:05.279788  306747 logs.go:282] 0 containers: []
	W1017 19:27:05.279798  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:05.279804  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:05.279860  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:05.307767  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:05.307790  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:05.307796  306747 cri.go:89] found id: ""
	I1017 19:27:05.307803  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:05.307857  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.311429  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.314827  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:05.314906  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:05.340148  306747 cri.go:89] found id: ""
	I1017 19:27:05.340175  306747 logs.go:282] 0 containers: []
	W1017 19:27:05.340184  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:05.340190  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:05.340246  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:05.366040  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:05.366063  306747 cri.go:89] found id: ""
	I1017 19:27:05.366071  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:05.366145  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.369954  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:05.370054  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:05.396415  306747 cri.go:89] found id: ""
	I1017 19:27:05.396439  306747 logs.go:282] 0 containers: []
	W1017 19:27:05.396448  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:05.396457  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:05.396468  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:05.491768  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:05.491804  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:05.510133  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:05.510179  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:05.588291  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:05.580157    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.580846    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.582570    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.583481    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.584634    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:05.580157    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.580846    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.582570    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.583481    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.584634    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:05.588313  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:05.588326  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:05.616894  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:05.616921  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:05.660215  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:05.660252  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:05.715621  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:05.715657  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:05.744211  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:05.744240  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:05.777510  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:05.777544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:05.808038  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:05.808066  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:05.885964  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:05.886000  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:08.420171  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:08.431142  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:08.431221  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:08.457528  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:08.457552  306747 cri.go:89] found id: ""
	I1017 19:27:08.457561  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:08.457616  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.461556  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:08.461665  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:08.492016  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:08.492039  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:08.492044  306747 cri.go:89] found id: ""
	I1017 19:27:08.492052  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:08.492103  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.495761  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.500185  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:08.500282  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:08.526916  306747 cri.go:89] found id: ""
	I1017 19:27:08.526941  306747 logs.go:282] 0 containers: []
	W1017 19:27:08.526950  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:08.526957  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:08.527014  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:08.556113  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:08.556134  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:08.556140  306747 cri.go:89] found id: ""
	I1017 19:27:08.556147  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:08.556214  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.560101  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.564014  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:08.564084  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:08.594033  306747 cri.go:89] found id: ""
	I1017 19:27:08.594056  306747 logs.go:282] 0 containers: []
	W1017 19:27:08.594071  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:08.594079  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:08.594135  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:08.620047  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:08.620113  306747 cri.go:89] found id: ""
	I1017 19:27:08.620142  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:08.620221  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.624310  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:08.624418  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:08.649502  306747 cri.go:89] found id: ""
	I1017 19:27:08.649567  306747 logs.go:282] 0 containers: []
	W1017 19:27:08.649595  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:08.649623  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:08.649648  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:08.743803  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:08.743839  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:08.769242  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:08.769268  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:08.799565  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:08.799593  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:08.828556  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:08.828635  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:08.846407  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:08.846438  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:08.930960  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:08.922375    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.923180    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.925039    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.925592    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.927335    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:08.922375    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.923180    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.925039    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.925592    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.927335    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:08.930984  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:08.930996  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:08.989884  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:08.989918  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:09.029740  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:09.029776  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:09.088750  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:09.088784  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:09.174757  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:09.174791  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:11.706527  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:11.717507  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:11.717580  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:11.742517  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:11.742540  306747 cri.go:89] found id: ""
	I1017 19:27:11.742548  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:11.742628  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.746473  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:11.746545  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:11.778260  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:11.778322  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:11.778341  306747 cri.go:89] found id: ""
	I1017 19:27:11.778364  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:11.778435  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.782026  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.785484  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:11.785543  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:11.816069  306747 cri.go:89] found id: ""
	I1017 19:27:11.816094  306747 logs.go:282] 0 containers: []
	W1017 19:27:11.816103  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:11.816109  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:11.816175  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:11.841738  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:11.841812  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:11.841832  306747 cri.go:89] found id: ""
	I1017 19:27:11.841848  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:11.841921  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.845737  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.849826  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:11.849962  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:11.877696  306747 cri.go:89] found id: ""
	I1017 19:27:11.877760  306747 logs.go:282] 0 containers: []
	W1017 19:27:11.877783  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:11.877806  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:11.877878  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:11.905454  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:11.905478  306747 cri.go:89] found id: ""
	I1017 19:27:11.905487  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:11.905551  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.909271  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:11.909371  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:11.937354  306747 cri.go:89] found id: ""
	I1017 19:27:11.937378  306747 logs.go:282] 0 containers: []
	W1017 19:27:11.937388  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:11.937397  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:11.937408  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:11.964198  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:11.964227  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:12.047655  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:12.047711  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:12.152282  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:12.152323  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:12.185576  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:12.185607  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:12.216321  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:12.216350  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:12.234007  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:12.234037  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:12.302472  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:12.293592    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.294322    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.296814    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.297401    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.299030    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:12.293592    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.294322    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.296814    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.297401    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.299030    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:12.302493  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:12.302508  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:12.361658  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:12.361692  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:12.396422  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:12.396455  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:12.450643  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:12.450679  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:14.981141  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:14.992478  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:14.992583  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:15.029616  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:15.029652  306747 cri.go:89] found id: ""
	I1017 19:27:15.029662  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:15.029733  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.034198  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:15.034280  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:15.067180  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:15.067204  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:15.067210  306747 cri.go:89] found id: ""
	I1017 19:27:15.067223  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:15.067278  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.071734  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.075202  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:15.075278  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:15.102244  306747 cri.go:89] found id: ""
	I1017 19:27:15.102269  306747 logs.go:282] 0 containers: []
	W1017 19:27:15.102278  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:15.102285  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:15.102345  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:15.130161  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:15.130189  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:15.130195  306747 cri.go:89] found id: ""
	I1017 19:27:15.130203  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:15.130258  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.134790  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.138971  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:15.139069  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:15.173861  306747 cri.go:89] found id: ""
	I1017 19:27:15.173886  306747 logs.go:282] 0 containers: []
	W1017 19:27:15.173896  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:15.173903  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:15.173964  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:15.202641  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:15.202665  306747 cri.go:89] found id: ""
	I1017 19:27:15.202674  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:15.202732  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.206633  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:15.206702  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:15.234246  306747 cri.go:89] found id: ""
	I1017 19:27:15.234273  306747 logs.go:282] 0 containers: []
	W1017 19:27:15.234283  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:15.234294  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:15.234305  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:15.315039  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:15.315073  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:15.418425  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:15.418463  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:15.436291  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:15.436322  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:15.508060  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:15.500418    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.501026    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.502514    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.502986    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.504397    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:15.500418    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.501026    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.502514    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.502986    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.504397    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:15.508127  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:15.508156  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:15.541312  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:15.541345  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:15.597746  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:15.597777  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:15.630514  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:15.630544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:15.662426  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:15.662454  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:15.690843  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:15.690870  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:15.737261  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:15.737305  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:18.271724  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:18.282865  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:18.282933  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:18.310461  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:18.310530  306747 cri.go:89] found id: ""
	I1017 19:27:18.310545  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:18.310598  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.314206  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:18.314277  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:18.343711  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:18.343736  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:18.343741  306747 cri.go:89] found id: ""
	I1017 19:27:18.343750  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:18.343827  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.347663  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.351287  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:18.351359  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:18.378302  306747 cri.go:89] found id: ""
	I1017 19:27:18.378329  306747 logs.go:282] 0 containers: []
	W1017 19:27:18.378350  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:18.378356  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:18.378434  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:18.405852  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:18.405876  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:18.405881  306747 cri.go:89] found id: ""
	I1017 19:27:18.405889  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:18.405977  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.409609  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.413366  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:18.413434  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:18.438274  306747 cri.go:89] found id: ""
	I1017 19:27:18.438308  306747 logs.go:282] 0 containers: []
	W1017 19:27:18.438332  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:18.438348  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:18.438428  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:18.465310  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:18.465379  306747 cri.go:89] found id: ""
	I1017 19:27:18.465394  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:18.465449  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.469114  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:18.469267  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:18.495209  306747 cri.go:89] found id: ""
	I1017 19:27:18.495236  306747 logs.go:282] 0 containers: []
	W1017 19:27:18.495245  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:18.495254  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:18.495269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:18.521513  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:18.521541  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:18.551762  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:18.551788  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:18.647502  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:18.647539  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:18.665784  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:18.665815  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:18.718577  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:18.718624  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:18.777594  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:18.777628  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:18.807963  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:18.807989  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:18.892875  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:18.892910  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:18.960765  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:18.951643    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.952944    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.953536    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.955189    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.955840    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:18.951643    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.952944    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.953536    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.955189    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.955840    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:18.960787  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:18.960801  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:18.988908  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:18.988936  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:21.525356  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:21.536317  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:21.536383  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:21.562005  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:21.562074  306747 cri.go:89] found id: ""
	I1017 19:27:21.562089  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:21.562148  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.565814  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:21.565899  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:21.593641  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:21.593662  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:21.593668  306747 cri.go:89] found id: ""
	I1017 19:27:21.593675  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:21.593728  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.597715  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.601210  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:21.601286  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:21.626313  306747 cri.go:89] found id: ""
	I1017 19:27:21.626339  306747 logs.go:282] 0 containers: []
	W1017 19:27:21.626349  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:21.626355  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:21.626413  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:21.658772  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:21.658794  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:21.658800  306747 cri.go:89] found id: ""
	I1017 19:27:21.658807  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:21.658866  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.662812  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.666487  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:21.666561  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:21.698844  306747 cri.go:89] found id: ""
	I1017 19:27:21.698905  306747 logs.go:282] 0 containers: []
	W1017 19:27:21.698927  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:21.698951  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:21.699030  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:21.728779  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:21.728838  306747 cri.go:89] found id: ""
	I1017 19:27:21.728865  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:21.728939  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.732581  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:21.732691  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:21.758611  306747 cri.go:89] found id: ""
	I1017 19:27:21.758636  306747 logs.go:282] 0 containers: []
	W1017 19:27:21.758645  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:21.758655  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:21.758685  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:21.853910  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:21.853951  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:21.929259  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:21.920729    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.921839    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.923480    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.923794    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.925410    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:21.920729    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.921839    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.923480    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.923794    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.925410    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:21.929281  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:21.929294  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:21.969445  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:21.969472  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:22.060427  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:22.060560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:22.126121  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:22.126202  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:22.196425  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:22.196503  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:22.261955  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:22.262043  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:22.285064  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:22.285159  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:22.339749  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:22.339827  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:22.385350  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:22.385427  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:24.966467  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:24.992294  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:24.992366  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:25.035727  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:25.035754  306747 cri.go:89] found id: ""
	I1017 19:27:25.035762  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:25.035847  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.040229  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:25.040304  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:25.088117  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:25.088145  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:25.088152  306747 cri.go:89] found id: ""
	I1017 19:27:25.088159  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:25.088215  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.092329  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.099299  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:25.099383  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:25.150822  306747 cri.go:89] found id: ""
	I1017 19:27:25.150858  306747 logs.go:282] 0 containers: []
	W1017 19:27:25.150868  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:25.150878  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:25.150945  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:25.211825  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:25.211850  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:25.211855  306747 cri.go:89] found id: ""
	I1017 19:27:25.211863  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:25.211927  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.217398  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.221047  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:25.221126  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:25.258850  306747 cri.go:89] found id: ""
	I1017 19:27:25.258885  306747 logs.go:282] 0 containers: []
	W1017 19:27:25.258895  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:25.258904  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:25.258968  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:25.295477  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:25.295500  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:25.295512  306747 cri.go:89] found id: ""
	I1017 19:27:25.295520  306747 logs.go:282] 2 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:25.295576  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.301386  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.305803  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:25.305873  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:25.334929  306747 cri.go:89] found id: ""
	I1017 19:27:25.334954  306747 logs.go:282] 0 containers: []
	W1017 19:27:25.334970  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:25.334986  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:25.335006  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:25.365373  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:25.365402  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:25.382590  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:25.382626  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:25.432469  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:25.432570  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:25.478525  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:25.478601  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:25.551480  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:25.551560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:25.583783  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:25.583858  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:25.679255  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:25.679301  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:25.739090  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:25.739118  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:25.854982  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:25.855021  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:25.955288  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:25.946765    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.947610    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.949285    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.949589    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.951072    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:25.946765    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.947610    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.949285    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.949589    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.951072    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:25.955307  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:25.955319  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:26.000458  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:26.000579  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
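The cycle above repeats on every poll: minikube first resolves container IDs with "sudo crictl ps -a --quiet --name=<component>", then tails each hit with "sudo crictl logs --tail 400 <id>", and falls back to journalctl for CRI-O and the kubelet. As a rough illustration only, a minimal Go sketch of that discover-then-tail step (hypothetical helper names; it assumes crictl is installed locally rather than being reached over SSH as in the log) could look like:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors "sudo crictl ps -a --quiet --name=<name>" and returns
    // the matching container IDs from crictl's output.
    func containerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    // tailLogs mirrors "sudo crictl logs --tail <n> <id>" and prints the result.
    func tailLogs(id string, n int) error {
    	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
    	fmt.Printf("==> logs for %s <==\n%s\n", id, out)
    	return err
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
    		ids, err := containerIDs(c)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no %q container found\n", c)
    			continue
    		}
    		for _, id := range ids {
    			_ = tailLogs(id, 400)
    		}
    	}
    }
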
	I1017 19:27:28.530525  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:28.542430  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:28.542500  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:28.570373  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:28.570394  306747 cri.go:89] found id: ""
	I1017 19:27:28.570402  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:28.570454  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.575832  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:28.575903  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:28.604287  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:28.604307  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:28.604313  306747 cri.go:89] found id: ""
	I1017 19:27:28.604320  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:28.604374  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.608248  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.612312  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:28.612380  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:28.638709  306747 cri.go:89] found id: ""
	I1017 19:27:28.638735  306747 logs.go:282] 0 containers: []
	W1017 19:27:28.638743  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:28.638750  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:28.638807  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:28.665927  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:28.665951  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:28.665957  306747 cri.go:89] found id: ""
	I1017 19:27:28.665964  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:28.666022  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.669671  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.673220  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:28.673317  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:28.703161  306747 cri.go:89] found id: ""
	I1017 19:27:28.703188  306747 logs.go:282] 0 containers: []
	W1017 19:27:28.703197  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:28.703204  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:28.703264  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:28.733314  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:28.733379  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:28.733389  306747 cri.go:89] found id: ""
	I1017 19:27:28.733397  306747 logs.go:282] 2 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:28.733460  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.736998  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.740330  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:28.740444  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:28.765130  306747 cri.go:89] found id: ""
	I1017 19:27:28.765156  306747 logs.go:282] 0 containers: []
	W1017 19:27:28.765165  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:28.765174  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:28.765216  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:28.834887  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:28.826610    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.827402    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.829127    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.829428    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.830934    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:28.826610    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.827402    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.829127    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.829428    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.830934    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:28.834910  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:28.834923  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:28.870142  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:28.870187  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:28.912354  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:28.912388  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:28.968695  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:28.968728  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:29.009047  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:29.009078  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:29.036706  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:29.036734  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:29.120616  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:29.120654  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:29.153285  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:29.153313  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:29.250625  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:29.250664  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:29.271875  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:29.271907  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:29.321668  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:29.321703  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:31.848333  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:31.859324  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:31.859392  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:31.892308  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:31.892331  306747 cri.go:89] found id: ""
	I1017 19:27:31.892347  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:31.892401  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.896342  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:31.896433  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:31.924335  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:31.924359  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:31.924364  306747 cri.go:89] found id: ""
	I1017 19:27:31.924371  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:31.924446  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.928119  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.931375  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:31.931444  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:31.961757  306747 cri.go:89] found id: ""
	I1017 19:27:31.961783  306747 logs.go:282] 0 containers: []
	W1017 19:27:31.961792  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:31.961800  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:31.961857  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:31.990900  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:31.990924  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:31.990929  306747 cri.go:89] found id: ""
	I1017 19:27:31.990937  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:31.990997  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.994670  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.998160  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:31.998292  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:32.030448  306747 cri.go:89] found id: ""
	I1017 19:27:32.030523  306747 logs.go:282] 0 containers: []
	W1017 19:27:32.030539  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:32.030548  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:32.030615  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:32.062242  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:32.062267  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:32.062272  306747 cri.go:89] found id: ""
	I1017 19:27:32.062280  306747 logs.go:282] 2 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:32.062332  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:32.066062  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:32.069606  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:32.069682  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:32.102492  306747 cri.go:89] found id: ""
	I1017 19:27:32.102534  306747 logs.go:282] 0 containers: []
	W1017 19:27:32.102544  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:32.102553  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:32.102566  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:32.179017  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:32.170484    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.170960    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.172496    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.172884    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.174718    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:32.170484    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.170960    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.172496    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.172884    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.174718    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:32.179037  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:32.179050  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:32.225447  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:32.225475  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:32.270526  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:32.270557  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:32.304149  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:32.304181  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:32.330757  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:32.330837  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:32.410571  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:32.410610  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:32.443417  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:32.443444  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:32.461860  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:32.461890  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:32.510037  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:32.510083  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:32.569278  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:32.569325  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:32.602243  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:32.602269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:35.200643  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:35.211574  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:35.211646  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:35.243134  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:35.243158  306747 cri.go:89] found id: ""
	I1017 19:27:35.243166  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:35.243222  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.247054  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:35.247144  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:35.276216  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:35.276237  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:35.276243  306747 cri.go:89] found id: ""
	I1017 19:27:35.276251  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:35.276304  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.280057  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.284007  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:35.284080  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:35.310830  306747 cri.go:89] found id: ""
	I1017 19:27:35.310909  306747 logs.go:282] 0 containers: []
	W1017 19:27:35.310932  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:35.310955  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:35.311062  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:35.354572  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:35.354597  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:35.354602  306747 cri.go:89] found id: ""
	I1017 19:27:35.354610  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:35.354666  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.358450  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.361871  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:35.361942  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:35.389041  306747 cri.go:89] found id: ""
	I1017 19:27:35.389065  306747 logs.go:282] 0 containers: []
	W1017 19:27:35.389073  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:35.389079  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:35.389137  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:35.415942  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:35.415967  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:35.415972  306747 cri.go:89] found id: ""
	I1017 19:27:35.415980  306747 logs.go:282] 2 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:35.416037  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.419700  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.423643  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:35.423765  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:35.450381  306747 cri.go:89] found id: ""
	I1017 19:27:35.450404  306747 logs.go:282] 0 containers: []
	W1017 19:27:35.450413  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:35.450422  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:35.450435  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:35.478252  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:35.478280  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:35.522590  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:35.522623  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:35.578335  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:35.578372  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:35.613061  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:35.613091  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:35.638492  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:35.638520  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:35.722854  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:35.722891  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:35.757639  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:35.757672  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:35.863697  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:35.863735  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:35.940574  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:35.932704    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.933394    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.935016    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.935464    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.936965    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:35.932704    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.933394    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.935016    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.935464    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.936965    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:35.940597  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:35.940610  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:35.976992  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:35.977024  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:36.004857  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:36.004894  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:38.527370  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:38.538426  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:38.538499  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:38.564462  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:38.564484  306747 cri.go:89] found id: ""
	I1017 19:27:38.564504  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:38.564583  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.568393  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:38.568469  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:38.593756  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:38.593785  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:38.593790  306747 cri.go:89] found id: ""
	I1017 19:27:38.593797  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:38.593850  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.597636  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.601069  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:38.601138  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:38.628357  306747 cri.go:89] found id: ""
	I1017 19:27:38.628382  306747 logs.go:282] 0 containers: []
	W1017 19:27:38.628391  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:38.628398  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:38.628455  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:38.653998  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:38.654020  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:38.654025  306747 cri.go:89] found id: ""
	I1017 19:27:38.654033  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:38.654092  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.658000  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.661429  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:38.661500  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:38.687831  306747 cri.go:89] found id: ""
	I1017 19:27:38.687857  306747 logs.go:282] 0 containers: []
	W1017 19:27:38.687866  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:38.687873  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:38.687939  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:38.728871  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:38.728893  306747 cri.go:89] found id: ""
	I1017 19:27:38.728902  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:38.728956  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.732553  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:38.732626  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:38.758108  306747 cri.go:89] found id: ""
	I1017 19:27:38.758131  306747 logs.go:282] 0 containers: []
	W1017 19:27:38.758139  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:38.758149  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:38.758160  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:38.856927  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:38.857005  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:38.875545  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:38.875575  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:38.948879  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:38.941082    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.941735    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.943334    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.943798    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.945334    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:38.941082    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.941735    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.943334    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.943798    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.945334    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:38.948901  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:38.948914  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:38.997335  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:38.997372  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:39.029015  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:39.029043  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:39.108011  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:39.108046  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:39.141940  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:39.141971  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:39.170446  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:39.170472  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:39.208445  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:39.208481  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:39.272902  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:39.272952  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:41.807281  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:41.817677  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:41.817808  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:41.847030  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:41.847052  306747 cri.go:89] found id: ""
	I1017 19:27:41.847060  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:41.847141  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.856702  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:41.856768  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:41.882291  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:41.882314  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:41.882320  306747 cri.go:89] found id: ""
	I1017 19:27:41.882337  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:41.882441  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.886489  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.896574  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:41.896698  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:41.922724  306747 cri.go:89] found id: ""
	I1017 19:27:41.922748  306747 logs.go:282] 0 containers: []
	W1017 19:27:41.922757  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:41.922763  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:41.922817  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:41.948998  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:41.949024  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:41.949030  306747 cri.go:89] found id: ""
	I1017 19:27:41.949038  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:41.949090  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.961165  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.965546  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:41.965617  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:41.994892  306747 cri.go:89] found id: ""
	I1017 19:27:41.994917  306747 logs.go:282] 0 containers: []
	W1017 19:27:41.994935  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:41.994943  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:41.995002  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:42.028588  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:42.028626  306747 cri.go:89] found id: ""
	I1017 19:27:42.028636  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:42.028712  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:42.035671  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:42.035764  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:42.067030  306747 cri.go:89] found id: ""
	I1017 19:27:42.067061  306747 logs.go:282] 0 containers: []
	W1017 19:27:42.067072  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:42.067081  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:42.067105  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:42.109133  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:42.109175  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:42.199861  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:42.199955  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:42.342289  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:42.342335  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:42.363849  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:42.363906  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:42.441824  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:42.432639    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.433836    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.434718    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.436054    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.436745    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:42.432639    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.433836    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.434718    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.436054    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.436745    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:42.441858  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:42.441872  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:42.471376  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:42.471404  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:42.516923  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:42.516960  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:42.595252  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:42.595288  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:42.623727  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:42.623757  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:42.665018  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:42.665048  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:45.203111  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:45.228005  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:45.228167  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:45.284064  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:45.284089  306747 cri.go:89] found id: ""
	I1017 19:27:45.284098  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:45.284165  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.293975  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:45.294167  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:45.366214  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:45.366372  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:45.366394  306747 cri.go:89] found id: ""
	I1017 19:27:45.366421  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:45.366520  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.385006  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.397052  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:45.397258  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:45.444612  306747 cri.go:89] found id: ""
	I1017 19:27:45.444689  306747 logs.go:282] 0 containers: []
	W1017 19:27:45.444712  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:45.444737  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:45.444839  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:45.475398  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:45.475418  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:45.475422  306747 cri.go:89] found id: ""
	I1017 19:27:45.475430  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:45.475483  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.480459  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.484700  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:45.484826  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:45.516264  306747 cri.go:89] found id: ""
	I1017 19:27:45.516289  306747 logs.go:282] 0 containers: []
	W1017 19:27:45.516298  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:45.516305  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:45.516385  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:45.545867  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:45.545891  306747 cri.go:89] found id: ""
	I1017 19:27:45.545900  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:45.545955  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.549781  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:45.549898  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:45.578811  306747 cri.go:89] found id: ""
	I1017 19:27:45.578837  306747 logs.go:282] 0 containers: []
	W1017 19:27:45.578847  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:45.578857  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:45.578870  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:45.605475  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:45.605507  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:45.687039  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:45.687081  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:45.755076  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:45.746538    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.747381    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.749046    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.749635    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.751252    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:45.746538    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.747381    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.749046    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.749635    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.751252    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:45.755099  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:45.755114  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:45.784001  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:45.784034  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:45.837928  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:45.837964  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:45.914633  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:45.914670  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:45.950096  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:45.950123  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:46.054149  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:46.054194  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:46.072594  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:46.072628  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:46.111999  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:46.112030  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:48.642924  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:48.653451  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:48.653519  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:48.679639  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:48.679659  306747 cri.go:89] found id: ""
	I1017 19:27:48.679667  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:48.679720  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.683701  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:48.683775  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:48.711679  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:48.711701  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:48.711707  306747 cri.go:89] found id: ""
	I1017 19:27:48.711714  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:48.711767  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.715462  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.718828  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:48.718914  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:48.745090  306747 cri.go:89] found id: ""
	I1017 19:27:48.745156  306747 logs.go:282] 0 containers: []
	W1017 19:27:48.745170  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:48.745178  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:48.745236  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:48.772250  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:48.772273  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:48.772278  306747 cri.go:89] found id: ""
	I1017 19:27:48.772286  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:48.772344  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.776030  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.779386  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:48.779454  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:48.805859  306747 cri.go:89] found id: ""
	I1017 19:27:48.805884  306747 logs.go:282] 0 containers: []
	W1017 19:27:48.805893  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:48.805900  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:48.805957  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:48.831953  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:48.831975  306747 cri.go:89] found id: ""
	I1017 19:27:48.831984  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:48.832040  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.835702  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:48.835770  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:48.869137  306747 cri.go:89] found id: ""
	I1017 19:27:48.869159  306747 logs.go:282] 0 containers: []
	W1017 19:27:48.869168  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:48.869177  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:48.869190  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:48.910676  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:48.910711  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:48.972655  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:48.972690  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:49.013320  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:49.013350  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:49.093756  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:49.093796  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:49.137959  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:49.137988  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:49.207174  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:49.198952    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.199631    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.201291    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.201757    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.203195    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:49.198952    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.199631    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.201291    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.201757    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.203195    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:49.207199  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:49.207215  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:49.255066  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:49.255135  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:49.283732  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:49.283760  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:49.395846  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:49.395882  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:49.414130  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:49.414161  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:51.941734  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:51.953584  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:51.953657  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:51.984051  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:51.984073  306747 cri.go:89] found id: ""
	I1017 19:27:51.984081  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:51.984225  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:51.989195  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:51.989276  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:52.018264  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:52.018291  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:52.018296  306747 cri.go:89] found id: ""
	I1017 19:27:52.018305  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:52.018390  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.022319  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.026112  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:52.026196  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:52.054070  306747 cri.go:89] found id: ""
	I1017 19:27:52.054097  306747 logs.go:282] 0 containers: []
	W1017 19:27:52.054107  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:52.054114  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:52.054234  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:52.091016  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:52.091040  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:52.091045  306747 cri.go:89] found id: ""
	I1017 19:27:52.091052  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:52.091109  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.095213  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.098982  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:52.099079  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:52.126556  306747 cri.go:89] found id: ""
	I1017 19:27:52.126590  306747 logs.go:282] 0 containers: []
	W1017 19:27:52.126601  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:52.126607  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:52.126676  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:52.158449  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:52.158473  306747 cri.go:89] found id: ""
	I1017 19:27:52.158482  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:52.158543  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.162572  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:52.162647  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:52.192007  306747 cri.go:89] found id: ""
	I1017 19:27:52.192033  306747 logs.go:282] 0 containers: []
	W1017 19:27:52.192042  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:52.192052  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:52.192066  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:52.209934  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:52.209966  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:52.285387  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:52.276095    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.276908    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.278520    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.279497    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.280119    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:52.276095    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.276908    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.278520    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.279497    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.280119    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:52.285410  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:52.285426  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:52.314784  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:52.314812  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:52.349858  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:52.349896  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:52.417120  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:52.417160  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:52.447498  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:52.447525  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:52.525405  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:52.525442  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:52.568336  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:52.568364  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:52.667592  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:52.667629  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:52.714508  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:52.714544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:55.241965  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:55.252843  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:55.252914  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:55.281150  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:55.281173  306747 cri.go:89] found id: ""
	I1017 19:27:55.281181  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:55.281254  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.285436  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:55.285508  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:55.311561  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:55.311585  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:55.311590  306747 cri.go:89] found id: ""
	I1017 19:27:55.311598  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:55.311654  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.315303  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.318720  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:55.318789  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:55.342910  306747 cri.go:89] found id: ""
	I1017 19:27:55.342937  306747 logs.go:282] 0 containers: []
	W1017 19:27:55.342946  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:55.342953  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:55.343012  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:55.369108  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:55.369130  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:55.369136  306747 cri.go:89] found id: ""
	I1017 19:27:55.369154  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:55.369212  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.372980  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.376499  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:55.376598  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:55.409872  306747 cri.go:89] found id: ""
	I1017 19:27:55.409898  306747 logs.go:282] 0 containers: []
	W1017 19:27:55.409907  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:55.409914  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:55.409970  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:55.435703  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:55.435725  306747 cri.go:89] found id: ""
	I1017 19:27:55.435734  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:55.435787  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.439520  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:55.439587  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:55.466991  306747 cri.go:89] found id: ""
	I1017 19:27:55.467017  306747 logs.go:282] 0 containers: []
	W1017 19:27:55.467026  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:55.467036  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:55.467048  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:55.492985  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:55.493014  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:55.566914  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:55.566950  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:55.643727  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:55.635444    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.636184    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.637061    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.638074    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.638650    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:55.635444    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.636184    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.637061    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.638074    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.638650    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:55.643796  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:55.643817  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:55.670365  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:55.670394  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:55.705898  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:55.705936  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:55.732124  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:55.732152  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:55.762958  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:55.762987  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:55.857491  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:55.857528  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:55.875620  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:55.875658  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:55.953454  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:55.953501  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:58.520452  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:58.530935  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:58.531015  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:58.557433  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:58.557455  306747 cri.go:89] found id: ""
	I1017 19:27:58.557464  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:58.557521  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.561276  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:58.561345  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:58.587982  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:58.588006  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:58.588011  306747 cri.go:89] found id: ""
	I1017 19:27:58.588018  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:58.588072  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.591894  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.595410  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:58.595490  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:58.620930  306747 cri.go:89] found id: ""
	I1017 19:27:58.620956  306747 logs.go:282] 0 containers: []
	W1017 19:27:58.620966  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:58.620972  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:58.621038  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:58.646484  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:58.646509  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:58.646514  306747 cri.go:89] found id: ""
	I1017 19:27:58.646522  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:58.646573  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.650281  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.653491  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:58.653564  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:58.679227  306747 cri.go:89] found id: ""
	I1017 19:27:58.679251  306747 logs.go:282] 0 containers: []
	W1017 19:27:58.679261  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:58.679271  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:58.679329  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:58.712878  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:58.712901  306747 cri.go:89] found id: ""
	I1017 19:27:58.712910  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:58.712965  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.717668  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:58.717744  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:58.743926  306747 cri.go:89] found id: ""
	I1017 19:27:58.743950  306747 logs.go:282] 0 containers: []
	W1017 19:27:58.743960  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:58.743969  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:58.743981  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:58.816251  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:58.808176    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.809065    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.810666    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.810959    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.812492    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:58.808176    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.809065    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.810666    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.810959    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.812492    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:58.816275  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:58.816289  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:58.880149  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:58.880187  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:58.926347  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:58.926379  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:58.959298  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:58.959326  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:58.985914  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:58.985941  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:59.060169  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:59.060206  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:59.098174  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:59.098204  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:59.193263  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:59.193298  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:59.223428  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:59.223461  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:59.282679  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:59.282714  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:01.802237  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:01.814388  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:01.814466  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:01.840376  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:01.840398  306747 cri.go:89] found id: ""
	I1017 19:28:01.840412  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:01.840465  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.844426  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:01.844496  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:01.873063  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:01.873085  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:01.873090  306747 cri.go:89] found id: ""
	I1017 19:28:01.873098  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:01.873155  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.877190  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.881085  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:01.881173  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:01.908701  306747 cri.go:89] found id: ""
	I1017 19:28:01.908726  306747 logs.go:282] 0 containers: []
	W1017 19:28:01.908736  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:01.908742  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:01.908799  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:01.936306  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:01.936330  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:01.936335  306747 cri.go:89] found id: ""
	I1017 19:28:01.936343  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:01.936397  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.940768  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.946060  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:01.946131  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:01.974191  306747 cri.go:89] found id: ""
	I1017 19:28:01.974217  306747 logs.go:282] 0 containers: []
	W1017 19:28:01.974227  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:01.974234  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:01.974299  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:02.003021  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:02.003047  306747 cri.go:89] found id: ""
	I1017 19:28:02.003056  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:02.003132  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:02.016728  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:02.016803  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:02.046662  306747 cri.go:89] found id: ""
	I1017 19:28:02.046688  306747 logs.go:282] 0 containers: []
	W1017 19:28:02.046697  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:02.046708  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:02.046744  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:02.076638  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:02.076670  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:02.097353  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:02.097384  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:02.149812  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:02.149852  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:02.212958  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:02.212995  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:02.242664  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:02.242692  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:02.329225  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:02.329262  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:02.364870  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:02.364906  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:02.472339  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:02.472377  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:02.541865  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:02.533392    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.534027    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.535792    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.536454    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.537580    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:02.533392    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.534027    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.535792    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.536454    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.537580    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:02.541887  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:02.541900  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:02.570859  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:02.570888  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:05.110395  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:05.121645  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:05.121716  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:05.153742  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:05.153766  306747 cri.go:89] found id: ""
	I1017 19:28:05.153775  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:05.153829  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.157576  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:05.157647  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:05.184788  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:05.184810  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:05.184815  306747 cri.go:89] found id: ""
	I1017 19:28:05.184823  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:05.184878  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.188586  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.192151  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:05.192222  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:05.222405  306747 cri.go:89] found id: ""
	I1017 19:28:05.222437  306747 logs.go:282] 0 containers: []
	W1017 19:28:05.222447  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:05.222453  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:05.222512  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:05.251383  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:05.251408  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:05.251413  306747 cri.go:89] found id: ""
	I1017 19:28:05.251421  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:05.251474  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.255443  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.258903  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:05.258971  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:05.289906  306747 cri.go:89] found id: ""
	I1017 19:28:05.289983  306747 logs.go:282] 0 containers: []
	W1017 19:28:05.289999  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:05.290007  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:05.290065  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:05.317057  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:05.317122  306747 cri.go:89] found id: ""
	I1017 19:28:05.317136  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:05.317202  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.320997  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:05.321071  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:05.350310  306747 cri.go:89] found id: ""
	I1017 19:28:05.350335  306747 logs.go:282] 0 containers: []
	W1017 19:28:05.350344  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:05.350353  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:05.350364  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:05.387607  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:05.387637  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:05.456949  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:05.448355    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.449098    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.450777    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.451358    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.452970    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:05.448355    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.449098    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.450777    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.451358    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.452970    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:05.457018  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:05.457045  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:05.484064  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:05.484139  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:05.543816  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:05.543851  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:05.573032  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:05.573058  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:05.651816  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:05.651853  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:05.753730  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:05.753765  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:05.772288  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:05.772320  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:05.827946  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:05.827982  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:05.872696  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:05.872731  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:08.406970  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:08.417284  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:08.417352  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:08.443772  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:08.443796  306747 cri.go:89] found id: ""
	I1017 19:28:08.443815  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:08.443868  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.447541  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:08.447633  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:08.472976  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:08.473004  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:08.473009  306747 cri.go:89] found id: ""
	I1017 19:28:08.473017  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:08.473070  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.476664  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.480025  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:08.480095  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:08.507100  306747 cri.go:89] found id: ""
	I1017 19:28:08.507122  306747 logs.go:282] 0 containers: []
	W1017 19:28:08.507130  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:08.507136  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:08.507194  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:08.532864  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:08.532888  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:08.532895  306747 cri.go:89] found id: ""
	I1017 19:28:08.532912  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:08.532966  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.536602  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.540037  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:08.540108  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:08.566233  306747 cri.go:89] found id: ""
	I1017 19:28:08.566258  306747 logs.go:282] 0 containers: []
	W1017 19:28:08.566267  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:08.566273  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:08.566348  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:08.593545  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:08.593568  306747 cri.go:89] found id: ""
	I1017 19:28:08.593577  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:08.593630  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.597170  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:08.597251  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:08.622805  306747 cri.go:89] found id: ""
	I1017 19:28:08.622829  306747 logs.go:282] 0 containers: []
	W1017 19:28:08.622838  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:08.622847  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:08.622886  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:08.718117  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:08.718158  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:08.736317  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:08.736358  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:08.785165  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:08.785200  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:08.813123  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:08.813154  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:08.842670  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:08.842698  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:08.883049  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:08.883081  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:08.948658  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:08.940826    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.941602    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.943150    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.943452    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.944921    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:08.940826    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.941602    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.943150    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.943452    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.944921    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:08.948680  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:08.948693  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:08.975235  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:08.975261  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:09.023572  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:09.023607  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:09.085674  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:09.085713  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:11.674341  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:11.684867  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:11.684937  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:11.710235  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:11.710258  306747 cri.go:89] found id: ""
	I1017 19:28:11.710266  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:11.710317  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.713823  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:11.713893  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:11.743536  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:11.743557  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:11.743564  306747 cri.go:89] found id: ""
	I1017 19:28:11.743571  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:11.743623  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.747225  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.750360  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:11.750423  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:11.775489  306747 cri.go:89] found id: ""
	I1017 19:28:11.775553  306747 logs.go:282] 0 containers: []
	W1017 19:28:11.775575  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:11.775599  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:11.775689  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:11.804973  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:11.804993  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:11.804999  306747 cri.go:89] found id: ""
	I1017 19:28:11.805007  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:11.805064  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.809085  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.812425  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:11.812493  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:11.839019  306747 cri.go:89] found id: ""
	I1017 19:28:11.839042  306747 logs.go:282] 0 containers: []
	W1017 19:28:11.839051  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:11.839057  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:11.839113  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:11.867946  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:11.868012  306747 cri.go:89] found id: ""
	I1017 19:28:11.868036  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:11.868125  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.871735  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:11.871847  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:11.917369  306747 cri.go:89] found id: ""
	I1017 19:28:11.917435  306747 logs.go:282] 0 containers: []
	W1017 19:28:11.917448  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:11.917458  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:11.917473  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:12.015837  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:12.015876  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:12.037612  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:12.037645  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:12.066665  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:12.066695  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:12.124283  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:12.124321  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:12.157456  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:12.157487  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:12.218566  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:12.218603  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:12.246576  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:12.246601  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:12.323228  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:12.323263  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:12.389358  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:12.381335    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.382085    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.383576    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.384016    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.385432    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:12.381335    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.382085    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.383576    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.384016    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.385432    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:12.389381  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:12.389394  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:12.420218  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:12.420248  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:14.967518  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:14.978398  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:14.978489  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:15.008833  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:15.008861  306747 cri.go:89] found id: ""
	I1017 19:28:15.008869  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:15.008962  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.019024  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:15.019115  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:15.048619  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:15.048641  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:15.048646  306747 cri.go:89] found id: ""
	I1017 19:28:15.048653  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:15.048711  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.052829  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.056849  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:15.056960  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:15.090614  306747 cri.go:89] found id: ""
	I1017 19:28:15.090646  306747 logs.go:282] 0 containers: []
	W1017 19:28:15.090670  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:15.090679  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:15.090755  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:15.121287  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:15.121354  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:15.121367  306747 cri.go:89] found id: ""
	I1017 19:28:15.121376  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:15.121441  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.126749  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.130705  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:15.130786  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:15.158437  306747 cri.go:89] found id: ""
	I1017 19:28:15.158462  306747 logs.go:282] 0 containers: []
	W1017 19:28:15.158472  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:15.158479  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:15.158542  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:15.187795  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:15.187819  306747 cri.go:89] found id: ""
	I1017 19:28:15.187828  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:15.187885  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.191939  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:15.192014  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:15.221830  306747 cri.go:89] found id: ""
	I1017 19:28:15.221856  306747 logs.go:282] 0 containers: []
	W1017 19:28:15.221866  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:15.221875  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:15.221886  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:15.314949  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:15.314983  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:15.334443  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:15.334524  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:15.391124  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:15.391159  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:15.464757  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:15.464794  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:15.499089  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:15.499118  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:15.572721  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:15.572758  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:15.604780  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:15.604809  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:15.673978  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:15.665870    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.666574    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.668276    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.668888    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.670272    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:15.665870    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.666574    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.668276    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.668888    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.670272    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:15.674001  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:15.674014  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:15.703550  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:15.703577  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:15.736137  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:15.736167  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:18.272459  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:18.284130  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:18.284202  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:18.317045  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:18.317114  306747 cri.go:89] found id: ""
	I1017 19:28:18.317140  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:18.317200  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.320946  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:18.321021  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:18.349966  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:18.350047  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:18.350069  306747 cri.go:89] found id: ""
	I1017 19:28:18.350078  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:18.350146  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.354094  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.357736  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:18.357840  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:18.389890  306747 cri.go:89] found id: ""
	I1017 19:28:18.389914  306747 logs.go:282] 0 containers: []
	W1017 19:28:18.389923  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:18.389929  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:18.389990  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:18.416552  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:18.416573  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:18.416577  306747 cri.go:89] found id: ""
	I1017 19:28:18.416584  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:18.416636  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.421408  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.425021  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:18.425127  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:18.451716  306747 cri.go:89] found id: ""
	I1017 19:28:18.451744  306747 logs.go:282] 0 containers: []
	W1017 19:28:18.451754  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:18.451760  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:18.451824  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:18.486286  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:18.486355  306747 cri.go:89] found id: ""
	I1017 19:28:18.486370  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:18.486424  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.490097  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:18.490214  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:18.517834  306747 cri.go:89] found id: ""
	I1017 19:28:18.517859  306747 logs.go:282] 0 containers: []
	W1017 19:28:18.517868  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:18.517877  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:18.517907  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:18.569373  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:18.569412  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:18.597414  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:18.597442  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:18.615623  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:18.615651  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:18.687384  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:18.679364    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.680188    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.681715    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.682200    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.683729    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:18.679364    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.680188    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.681715    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.682200    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.683729    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:18.687406  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:18.687420  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:18.724107  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:18.724135  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:18.757798  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:18.757832  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:18.823518  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:18.823556  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:18.868332  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:18.868358  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:18.948355  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:18.948391  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:18.980022  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:18.980052  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:21.580647  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:21.591760  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:21.591828  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:21.619734  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:21.619755  306747 cri.go:89] found id: ""
	I1017 19:28:21.619763  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:21.619822  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.623634  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:21.623706  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:21.650174  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:21.650202  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:21.650207  306747 cri.go:89] found id: ""
	I1017 19:28:21.650215  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:21.650275  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.654337  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.658320  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:21.658390  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:21.685562  306747 cri.go:89] found id: ""
	I1017 19:28:21.685587  306747 logs.go:282] 0 containers: []
	W1017 19:28:21.685596  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:21.685602  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:21.685696  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:21.711151  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:21.711175  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:21.711180  306747 cri.go:89] found id: ""
	I1017 19:28:21.711188  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:21.711241  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.714981  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.718517  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:21.718587  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:21.745770  306747 cri.go:89] found id: ""
	I1017 19:28:21.745796  306747 logs.go:282] 0 containers: []
	W1017 19:28:21.745805  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:21.745812  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:21.745872  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:21.773020  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:21.773042  306747 cri.go:89] found id: ""
	I1017 19:28:21.773052  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:21.773107  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.776980  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:21.777073  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:21.805110  306747 cri.go:89] found id: ""
	I1017 19:28:21.805137  306747 logs.go:282] 0 containers: []
	W1017 19:28:21.805146  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:21.805156  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:21.805187  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:21.915295  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:21.915339  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:21.934521  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:21.934553  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:21.971829  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:21.971867  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:22.032460  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:22.032500  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:22.069813  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:22.069901  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:22.150515  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:22.150553  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:22.186817  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:22.186843  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:22.250982  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:22.242783    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.243418    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.244975    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.245572    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.247184    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:22.242783    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.243418    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.244975    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.245572    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.247184    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:22.251005  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:22.251019  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:22.318367  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:22.318403  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:22.359962  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:22.359991  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:24.888496  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:24.899632  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:24.899701  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:24.927106  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:24.927126  306747 cri.go:89] found id: ""
	I1017 19:28:24.927135  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:24.927191  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:24.930789  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:24.930901  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:24.957962  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:24.957986  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:24.957992  306747 cri.go:89] found id: ""
	I1017 19:28:24.958000  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:24.958052  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:24.961689  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:24.965312  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:24.965388  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:24.999567  306747 cri.go:89] found id: ""
	I1017 19:28:24.999646  306747 logs.go:282] 0 containers: []
	W1017 19:28:24.999670  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:24.999692  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:24.999784  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:25.030377  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:25.030447  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:25.030466  306747 cri.go:89] found id: ""
	I1017 19:28:25.030493  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:25.030587  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:25.034492  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:25.038213  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:25.038307  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:25.064926  306747 cri.go:89] found id: ""
	I1017 19:28:25.065005  306747 logs.go:282] 0 containers: []
	W1017 19:28:25.065022  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:25.065029  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:25.065092  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:25.104761  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:25.104835  306747 cri.go:89] found id: ""
	I1017 19:28:25.104851  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:25.104908  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:25.109062  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:25.109153  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:25.137891  306747 cri.go:89] found id: ""
	I1017 19:28:25.137923  306747 logs.go:282] 0 containers: []
	W1017 19:28:25.137931  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:25.137940  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:25.137953  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:25.170975  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:25.171007  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:25.204002  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:25.204031  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:25.297840  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:25.297914  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:25.315642  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:25.315682  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:25.369974  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:25.370011  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:25.452713  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:25.452749  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:25.483409  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:25.483439  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:25.558385  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:25.550412    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.551034    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.552731    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.553294    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.554883    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:25.550412    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.551034    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.552731    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.553294    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.554883    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:25.558408  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:25.558421  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:25.585961  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:25.585989  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:25.617689  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:25.617720  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:28.181797  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:28.193078  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:28.193193  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:28.220858  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:28.220880  306747 cri.go:89] found id: ""
	I1017 19:28:28.220889  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:28.220949  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.224889  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:28.224962  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:28.256761  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:28.256782  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:28.256787  306747 cri.go:89] found id: ""
	I1017 19:28:28.256795  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:28.256849  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.261049  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.264952  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:28.265076  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:28.291441  306747 cri.go:89] found id: ""
	I1017 19:28:28.291509  306747 logs.go:282] 0 containers: []
	W1017 19:28:28.291533  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:28.291556  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:28.291641  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:28.318704  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:28.318768  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:28.318790  306747 cri.go:89] found id: ""
	I1017 19:28:28.318815  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:28.318904  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.323349  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.327034  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:28.327096  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:28.357958  306747 cri.go:89] found id: ""
	I1017 19:28:28.357983  306747 logs.go:282] 0 containers: []
	W1017 19:28:28.357992  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:28.358001  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:28.358059  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:28.384163  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:28.384187  306747 cri.go:89] found id: ""
	I1017 19:28:28.384196  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:28.384262  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.387976  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:28.388088  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:28.414600  306747 cri.go:89] found id: ""
	I1017 19:28:28.414625  306747 logs.go:282] 0 containers: []
	W1017 19:28:28.414635  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:28.414644  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:28.414655  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:28.478712  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:28.469484    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.470334    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.472333    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.473060    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.474868    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:28.469484    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.470334    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.472333    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.473060    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.474868    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:28.478736  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:28.478749  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:28.504392  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:28.504432  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:28.566111  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:28.566147  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:28.597513  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:28.597544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:28.676314  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:28.676352  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:28.779140  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:28.779181  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:28.830823  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:28.830858  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:28.873192  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:28.873224  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:28.907594  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:28.907621  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:28.939159  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:28.939188  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:31.457173  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:31.468390  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:31.468462  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:31.500159  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:31.500183  306747 cri.go:89] found id: ""
	I1017 19:28:31.500191  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:31.500245  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.503981  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:31.504051  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:31.529707  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:31.529735  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:31.529740  306747 cri.go:89] found id: ""
	I1017 19:28:31.529748  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:31.529810  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.533478  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.536973  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:31.537042  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:31.562894  306747 cri.go:89] found id: ""
	I1017 19:28:31.562920  306747 logs.go:282] 0 containers: []
	W1017 19:28:31.562929  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:31.562936  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:31.562996  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:31.591920  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:31.591943  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:31.591949  306747 cri.go:89] found id: ""
	I1017 19:28:31.591956  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:31.592011  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.595596  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.598999  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:31.599093  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:31.631142  306747 cri.go:89] found id: ""
	I1017 19:28:31.631164  306747 logs.go:282] 0 containers: []
	W1017 19:28:31.631173  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:31.631179  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:31.631264  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:31.657995  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:31.658017  306747 cri.go:89] found id: ""
	I1017 19:28:31.658026  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:31.658077  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.661797  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:31.661866  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:31.687995  306747 cri.go:89] found id: ""
	I1017 19:28:31.688019  306747 logs.go:282] 0 containers: []
	W1017 19:28:31.688028  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:31.688037  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:31.688049  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:31.714258  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:31.714288  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:31.743480  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:31.743510  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:31.839126  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:31.839165  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:31.865944  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:31.865971  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:31.923800  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:31.923834  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:32.015198  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:32.015258  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:32.108618  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:32.108656  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:32.127026  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:32.127056  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:32.197465  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:32.189288    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.190038    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.191643    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.191956    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.193464    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:32.189288    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.190038    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.191643    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.191956    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.193464    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:32.197487  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:32.197501  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:32.230297  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:32.230333  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:34.763313  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:34.773938  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:34.774008  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:34.801473  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:34.801491  306747 cri.go:89] found id: ""
	I1017 19:28:34.801498  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:34.801568  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.805380  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:34.805451  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:34.831939  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:34.831964  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:34.831968  306747 cri.go:89] found id: ""
	I1017 19:28:34.831976  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:34.832034  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.836223  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.839881  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:34.839985  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:34.867700  306747 cri.go:89] found id: ""
	I1017 19:28:34.867725  306747 logs.go:282] 0 containers: []
	W1017 19:28:34.867735  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:34.867741  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:34.867826  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:34.898720  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:34.898743  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:34.898748  306747 cri.go:89] found id: ""
	I1017 19:28:34.898756  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:34.898827  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.902459  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.905896  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:34.905974  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:34.933166  306747 cri.go:89] found id: ""
	I1017 19:28:34.933242  306747 logs.go:282] 0 containers: []
	W1017 19:28:34.933258  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:34.933266  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:34.933326  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:34.961978  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:34.962067  306747 cri.go:89] found id: ""
	I1017 19:28:34.962091  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:34.962173  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.966069  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:34.966147  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:34.993526  306747 cri.go:89] found id: ""
	I1017 19:28:34.993565  306747 logs.go:282] 0 containers: []
	W1017 19:28:34.993574  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:34.993583  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:34.993594  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:35.023086  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:35.023173  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:35.057614  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:35.057652  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:35.126909  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:35.126944  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:35.207646  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:35.207681  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:35.240791  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:35.240824  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:35.259253  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:35.259285  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:35.327544  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:35.319793    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.320443    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.321977    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.322405    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.323890    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:35.319793    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.320443    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.321977    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.322405    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.323890    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:35.327566  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:35.327579  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:35.377112  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:35.377150  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:35.405892  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:35.405920  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:35.431201  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:35.431230  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:38.030766  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:38.042946  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:38.043015  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:38.074181  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:38.074215  306747 cri.go:89] found id: ""
	I1017 19:28:38.074224  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:38.074287  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.079011  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:38.079083  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:38.108493  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:38.108592  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:38.108612  306747 cri.go:89] found id: ""
	I1017 19:28:38.108636  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:38.108721  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.112489  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.115918  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:38.116030  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:38.146192  306747 cri.go:89] found id: ""
	I1017 19:28:38.146215  306747 logs.go:282] 0 containers: []
	W1017 19:28:38.146225  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:38.146233  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:38.146315  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:38.178299  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:38.178363  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:38.178375  306747 cri.go:89] found id: ""
	I1017 19:28:38.178382  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:38.178438  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.182144  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.185723  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:38.185785  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:38.210486  306747 cri.go:89] found id: ""
	I1017 19:28:38.210509  306747 logs.go:282] 0 containers: []
	W1017 19:28:38.210518  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:38.210524  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:38.210578  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:38.240550  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:38.240573  306747 cri.go:89] found id: ""
	I1017 19:28:38.240581  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:38.240633  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.246616  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:38.246710  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:38.272684  306747 cri.go:89] found id: ""
	I1017 19:28:38.272710  306747 logs.go:282] 0 containers: []
	W1017 19:28:38.272719  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:38.272728  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:38.272759  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:38.291309  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:38.291338  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:38.362093  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:38.354481    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.355177    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.356720    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.357017    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.358292    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:38.354481    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.355177    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.356720    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.357017    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.358292    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:38.362115  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:38.362136  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:38.388487  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:38.388541  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:38.460507  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:38.460545  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:38.493438  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:38.493472  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:38.519348  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:38.519378  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:38.547771  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:38.547800  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:38.646739  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:38.646779  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:38.711727  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:38.711765  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:38.794605  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:38.794645  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:41.329100  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:41.340102  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:41.340191  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:41.378237  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:41.378304  306747 cri.go:89] found id: ""
	I1017 19:28:41.378327  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:41.378411  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.382295  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:41.382433  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:41.413432  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:41.413454  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:41.413459  306747 cri.go:89] found id: ""
	I1017 19:28:41.413483  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:41.413541  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.417349  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.420940  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:41.421030  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:41.447730  306747 cri.go:89] found id: ""
	I1017 19:28:41.447754  306747 logs.go:282] 0 containers: []
	W1017 19:28:41.447763  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:41.447769  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:41.447917  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:41.473491  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:41.473514  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:41.473520  306747 cri.go:89] found id: ""
	I1017 19:28:41.473527  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:41.473602  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.477615  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.481139  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:41.481211  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:41.507258  306747 cri.go:89] found id: ""
	I1017 19:28:41.507283  306747 logs.go:282] 0 containers: []
	W1017 19:28:41.507292  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:41.507300  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:41.507356  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:41.537051  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:41.537073  306747 cri.go:89] found id: ""
	I1017 19:28:41.537082  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:41.537134  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.540852  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:41.540920  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:41.567361  306747 cri.go:89] found id: ""
	I1017 19:28:41.567389  306747 logs.go:282] 0 containers: []
	W1017 19:28:41.567398  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:41.567407  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:41.567419  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:41.599142  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:41.599172  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:41.635743  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:41.635773  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:41.654302  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:41.654331  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:41.717143  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:41.717179  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:41.792345  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:41.792380  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:41.871479  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:41.871517  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:41.975433  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:41.975512  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:42.054059  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:42.044191    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.045351    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.046050    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.047965    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.048651    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:42.044191    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.045351    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.046050    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.047965    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.048651    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:42.054083  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:42.054106  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:42.089914  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:42.089944  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:42.149148  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:42.149200  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:44.709425  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:44.719908  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:44.719977  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:44.763510  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:44.763534  306747 cri.go:89] found id: ""
	I1017 19:28:44.763541  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:44.763594  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.767241  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:44.767313  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:44.795651  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:44.795675  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:44.795681  306747 cri.go:89] found id: ""
	I1017 19:28:44.795689  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:44.795742  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.800272  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.804452  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:44.804565  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:44.839339  306747 cri.go:89] found id: ""
	I1017 19:28:44.839371  306747 logs.go:282] 0 containers: []
	W1017 19:28:44.839379  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:44.839386  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:44.839452  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:44.875066  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:44.875099  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:44.875105  306747 cri.go:89] found id: ""
	I1017 19:28:44.875139  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:44.875214  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.880309  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.883914  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:44.884020  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:44.917517  306747 cri.go:89] found id: ""
	I1017 19:28:44.917586  306747 logs.go:282] 0 containers: []
	W1017 19:28:44.917614  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:44.917638  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:44.917727  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:44.946317  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:44.946393  306747 cri.go:89] found id: ""
	I1017 19:28:44.946416  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:44.946496  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.950194  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:44.950311  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:44.976935  306747 cri.go:89] found id: ""
	I1017 19:28:44.977000  306747 logs.go:282] 0 containers: []
	W1017 19:28:44.977027  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:44.977054  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:44.977071  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:45.083362  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:45.083465  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:45.185240  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:45.174155    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.175051    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.176949    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.178114    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.178917    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:45.174155    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.175051    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.176949    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.178114    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.178917    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:45.185281  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:45.185298  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:45.229219  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:45.229247  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:45.303101  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:45.303141  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:45.395057  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:45.395208  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:45.422882  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:45.422938  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:45.465002  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:45.465035  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:45.501568  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:45.501600  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:45.530952  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:45.530983  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:45.610519  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:45.610560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:48.146542  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:48.158014  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:48.158095  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:48.185610  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:48.185676  306747 cri.go:89] found id: ""
	I1017 19:28:48.185699  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:48.185773  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.189874  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:48.189975  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:48.216931  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:48.216997  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:48.217020  306747 cri.go:89] found id: ""
	I1017 19:28:48.217044  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:48.217112  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.220961  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.224622  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:48.224715  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:48.254633  306747 cri.go:89] found id: ""
	I1017 19:28:48.254660  306747 logs.go:282] 0 containers: []
	W1017 19:28:48.254669  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:48.254676  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:48.254759  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:48.280918  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:48.280996  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:48.281017  306747 cri.go:89] found id: ""
	I1017 19:28:48.281033  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:48.281101  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.285444  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.289246  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:48.289369  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:48.317150  306747 cri.go:89] found id: ""
	I1017 19:28:48.317216  306747 logs.go:282] 0 containers: []
	W1017 19:28:48.317244  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:48.317275  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:48.317350  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:48.347609  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:48.347643  306747 cri.go:89] found id: ""
	I1017 19:28:48.347652  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:48.347704  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.351509  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:48.351584  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:48.376680  306747 cri.go:89] found id: ""
	I1017 19:28:48.376708  306747 logs.go:282] 0 containers: []
	W1017 19:28:48.376716  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:48.376726  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:48.376738  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:48.452752  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:48.452788  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:48.484352  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:48.484382  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:48.510315  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:48.510344  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:48.571544  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:48.571578  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:48.609922  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:48.609951  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:48.642129  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:48.642158  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:48.737103  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:48.737139  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:48.755251  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:48.755324  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:48.826596  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:48.817740    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.818885    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.819683    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.820717    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.821339    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:48.817740    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.818885    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.819683    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.820717    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.821339    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:48.826621  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:48.826676  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:48.917412  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:48.917447  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:51.447884  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:51.458905  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:51.458975  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:51.486341  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:51.486364  306747 cri.go:89] found id: ""
	I1017 19:28:51.486373  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:51.486435  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.490132  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:51.490214  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:51.515926  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:51.515950  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:51.515956  306747 cri.go:89] found id: ""
	I1017 19:28:51.515964  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:51.516033  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.520421  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.524078  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:51.524150  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:51.558659  306747 cri.go:89] found id: ""
	I1017 19:28:51.558683  306747 logs.go:282] 0 containers: []
	W1017 19:28:51.558693  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:51.558700  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:51.558754  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:51.584326  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:51.584349  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:51.584355  306747 cri.go:89] found id: ""
	I1017 19:28:51.584362  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:51.584417  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.588059  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.591616  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:51.591692  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:51.621537  306747 cri.go:89] found id: ""
	I1017 19:28:51.621562  306747 logs.go:282] 0 containers: []
	W1017 19:28:51.621571  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:51.621577  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:51.621634  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:51.648966  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:51.648994  306747 cri.go:89] found id: ""
	I1017 19:28:51.649002  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:51.649064  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.652867  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:51.652934  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:51.685921  306747 cri.go:89] found id: ""
	I1017 19:28:51.685944  306747 logs.go:282] 0 containers: []
	W1017 19:28:51.685953  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:51.685962  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:51.685973  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:51.759988  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:51.760023  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:51.846069  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:51.835717    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.836264    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.837776    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.840665    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.841647    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:51.835717    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.836264    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.837776    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.840665    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.841647    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:51.846090  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:51.846105  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:51.875253  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:51.875281  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:51.929449  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:51.929478  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:52.036309  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:52.036348  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:52.054743  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:52.054772  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:52.088833  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:52.088860  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:52.157298  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:52.157332  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:52.199361  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:52.199392  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:52.268239  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:52.268286  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:54.799369  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:54.809961  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:54.810031  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:54.836137  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:54.836157  306747 cri.go:89] found id: ""
	I1017 19:28:54.836167  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:54.836220  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.839841  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:54.839912  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:54.873358  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:54.873379  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:54.873383  306747 cri.go:89] found id: ""
	I1017 19:28:54.873391  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:54.873445  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.877284  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.881090  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:54.881164  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:54.908431  306747 cri.go:89] found id: ""
	I1017 19:28:54.908456  306747 logs.go:282] 0 containers: []
	W1017 19:28:54.908465  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:54.908471  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:54.908607  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:54.935825  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:54.935845  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:54.935850  306747 cri.go:89] found id: ""
	I1017 19:28:54.935857  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:54.935913  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.939621  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.943502  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:54.943577  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:54.973718  306747 cri.go:89] found id: ""
	I1017 19:28:54.973742  306747 logs.go:282] 0 containers: []
	W1017 19:28:54.973751  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:54.973757  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:54.973818  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:55.004781  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:55.004802  306747 cri.go:89] found id: ""
	I1017 19:28:55.004818  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:55.004885  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:55.015050  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:55.015136  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:55.043899  306747 cri.go:89] found id: ""
	I1017 19:28:55.043966  306747 logs.go:282] 0 containers: []
	W1017 19:28:55.043988  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:55.044013  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:55.044056  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:55.097224  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:55.097263  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:55.126143  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:55.126175  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:55.170272  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:55.170302  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:55.190816  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:55.190846  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:55.229778  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:55.229815  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:55.296882  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:55.296954  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:55.322920  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:55.322960  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:55.398513  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:55.398549  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:55.499678  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:55.499714  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:55.563984  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:55.555178    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.556013    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.557806    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.558580    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.560270    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:55.555178    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.556013    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.557806    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.558580    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.560270    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:55.564010  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:55.564024  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:58.090313  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:58.101520  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:58.101590  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:58.135133  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:58.135155  306747 cri.go:89] found id: ""
	I1017 19:28:58.135165  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:58.135217  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.139309  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:58.139381  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:58.166722  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:58.166743  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:58.166749  306747 cri.go:89] found id: ""
	I1017 19:28:58.166757  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:58.166829  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.170644  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.174541  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:58.174614  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:58.200707  306747 cri.go:89] found id: ""
	I1017 19:28:58.200733  306747 logs.go:282] 0 containers: []
	W1017 19:28:58.200741  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:58.200748  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:58.200802  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:58.227069  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:58.227090  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:58.227095  306747 cri.go:89] found id: ""
	I1017 19:28:58.227102  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:58.227153  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.230793  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.234187  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:58.234268  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:58.260228  306747 cri.go:89] found id: ""
	I1017 19:28:58.260255  306747 logs.go:282] 0 containers: []
	W1017 19:28:58.260264  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:58.260271  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:58.260330  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:58.287560  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:58.287582  306747 cri.go:89] found id: ""
	I1017 19:28:58.287590  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:58.287642  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.291431  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:58.291498  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:58.319091  306747 cri.go:89] found id: ""
	I1017 19:28:58.319116  306747 logs.go:282] 0 containers: []
	W1017 19:28:58.319125  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:58.319133  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:58.319144  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:58.357128  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:58.357156  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:58.457940  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:58.457987  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:58.477285  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:58.477363  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:58.553846  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:58.545334    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.546110    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.547791    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.548153    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.549602    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:58.545334    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.546110    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.547791    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.548153    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.549602    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:58.553942  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:58.553987  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:58.588733  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:58.588806  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:58.615167  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:58.615234  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:58.668448  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:58.668480  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:58.701507  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:58.701539  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:58.772475  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:58.772512  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:58.800891  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:58.800921  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:01.380664  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:01.397862  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:01.397929  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:01.438317  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:01.438341  306747 cri.go:89] found id: ""
	I1017 19:29:01.438349  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:01.438408  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.448585  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:01.448665  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:01.480947  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:01.480971  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:01.480978  306747 cri.go:89] found id: ""
	I1017 19:29:01.480985  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:01.481040  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.488101  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.493426  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:01.493541  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:01.529725  306747 cri.go:89] found id: ""
	I1017 19:29:01.529759  306747 logs.go:282] 0 containers: []
	W1017 19:29:01.529767  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:01.529803  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:01.529888  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:01.570078  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:01.570130  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:01.570162  306747 cri.go:89] found id: ""
	I1017 19:29:01.570347  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:01.570572  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.580262  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.584761  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:01.584865  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:01.619278  306747 cri.go:89] found id: ""
	I1017 19:29:01.619316  306747 logs.go:282] 0 containers: []
	W1017 19:29:01.619326  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:01.619460  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:01.619709  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:01.668374  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:01.668398  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:01.668404  306747 cri.go:89] found id: ""
	I1017 19:29:01.668411  306747 logs.go:282] 2 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:29:01.668500  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.672629  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.676472  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:01.676559  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:01.718877  306747 cri.go:89] found id: ""
	I1017 19:29:01.718901  306747 logs.go:282] 0 containers: []
	W1017 19:29:01.718911  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:01.718979  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:01.719003  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:01.786370  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:01.786448  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:01.835925  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:01.836009  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:01.936969  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:01.937000  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:01.985828  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:01.985857  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:02.036057  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:02.036090  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:02.088571  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:02.088600  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:02.183054  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:02.174539    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.175524    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.177270    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.177576    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.179060    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:02.174539    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.175524    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.177270    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.177576    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.179060    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:02.183078  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:02.183094  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:02.214988  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:29:02.215019  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:02.246207  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:02.246238  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:02.338642  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:02.338682  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:02.473356  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:02.473435  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:04.994292  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:05.005817  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:05.005900  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:05.038175  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:05.038208  306747 cri.go:89] found id: ""
	I1017 19:29:05.038217  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:05.038276  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.042122  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:05.042193  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:05.072245  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:05.072271  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:05.072277  306747 cri.go:89] found id: ""
	I1017 19:29:05.072290  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:05.072369  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.085415  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.089790  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:05.089901  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:05.126026  306747 cri.go:89] found id: ""
	I1017 19:29:05.126051  306747 logs.go:282] 0 containers: []
	W1017 19:29:05.126059  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:05.126065  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:05.126129  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:05.157653  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:05.157689  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:05.157694  306747 cri.go:89] found id: ""
	I1017 19:29:05.157708  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:05.157780  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.162134  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.166047  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:05.166134  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:05.201222  306747 cri.go:89] found id: ""
	I1017 19:29:05.201247  306747 logs.go:282] 0 containers: []
	W1017 19:29:05.201266  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:05.201291  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:05.201364  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:05.228323  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:05.228343  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:05.228348  306747 cri.go:89] found id: ""
	I1017 19:29:05.228355  306747 logs.go:282] 2 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:29:05.228413  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.232758  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.236321  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:05.236407  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:05.264094  306747 cri.go:89] found id: ""
	I1017 19:29:05.264119  306747 logs.go:282] 0 containers: []
	W1017 19:29:05.264128  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:05.264137  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:05.264150  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:05.289719  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:05.289749  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:05.341596  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:05.341632  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:05.385650  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:05.385681  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:05.455993  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:05.456032  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:05.482902  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:05.482967  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:05.561357  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:05.561393  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:05.662914  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:05.662948  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:05.681986  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:05.682019  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:05.709932  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:29:05.709959  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:05.745521  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:05.745548  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:05.780007  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:05.780039  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:05.861169  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:05.844357    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.845194    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.846708    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.847144    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.849138    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:05.844357    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.845194    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.846708    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.847144    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.849138    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:08.361828  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:08.372509  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:08.372609  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:08.398614  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:08.398638  306747 cri.go:89] found id: ""
	I1017 19:29:08.398646  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:08.398707  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.402221  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:08.402294  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:08.426256  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:08.426278  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:08.426284  306747 cri.go:89] found id: ""
	I1017 19:29:08.426291  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:08.426341  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.429916  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.433518  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:08.433587  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:08.460461  306747 cri.go:89] found id: ""
	I1017 19:29:08.460487  306747 logs.go:282] 0 containers: []
	W1017 19:29:08.460495  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:08.460502  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:08.460591  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:08.488509  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:08.488562  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:08.488568  306747 cri.go:89] found id: ""
	I1017 19:29:08.488576  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:08.488628  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.492158  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.495581  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:08.495647  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:08.524899  306747 cri.go:89] found id: ""
	I1017 19:29:08.524920  306747 logs.go:282] 0 containers: []
	W1017 19:29:08.524928  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:08.524934  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:08.524997  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:08.552958  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:08.552979  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:08.552984  306747 cri.go:89] found id: ""
	I1017 19:29:08.552991  306747 logs.go:282] 2 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:29:08.553045  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.557091  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.560618  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:08.560683  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:08.587418  306747 cri.go:89] found id: ""
	I1017 19:29:08.587495  306747 logs.go:282] 0 containers: []
	W1017 19:29:08.587517  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:08.587557  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:29:08.587586  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:08.617740  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:08.617768  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:08.691709  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:08.691747  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:08.710175  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:08.710209  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:08.777270  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:08.777305  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:08.810729  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:08.810754  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:08.861497  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:08.861524  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:08.964232  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:08.964270  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:09.042894  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:09.034262    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.034773    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.036444    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.037159    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.038877    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:09.034262    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.034773    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.036444    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.037159    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.038877    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:09.042916  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:09.042941  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:09.067822  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:09.067849  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:09.107723  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:09.107755  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:09.186115  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:09.186151  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:11.716134  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:11.726531  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:11.726597  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:11.752711  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:11.752733  306747 cri.go:89] found id: ""
	I1017 19:29:11.752741  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:11.752795  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.756278  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:11.756366  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:11.786396  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:11.786424  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:11.786430  306747 cri.go:89] found id: ""
	I1017 19:29:11.786439  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:11.786523  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.790327  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.794284  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:11.794350  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:11.826413  306747 cri.go:89] found id: ""
	I1017 19:29:11.826437  306747 logs.go:282] 0 containers: []
	W1017 19:29:11.826446  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:11.826452  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:11.826507  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:11.861782  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:11.861855  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:11.861875  306747 cri.go:89] found id: ""
	I1017 19:29:11.861900  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:11.861986  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.866376  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.870040  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:11.870106  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:11.902703  306747 cri.go:89] found id: ""
	I1017 19:29:11.902725  306747 logs.go:282] 0 containers: []
	W1017 19:29:11.902739  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:11.902745  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:11.902803  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:11.932072  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:11.932141  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:11.932161  306747 cri.go:89] found id: ""
	I1017 19:29:11.932186  306747 logs.go:282] 2 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:29:11.932273  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.935981  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.939489  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:11.939560  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:11.975511  306747 cri.go:89] found id: ""
	I1017 19:29:11.975535  306747 logs.go:282] 0 containers: []
	W1017 19:29:11.975544  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:11.975553  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:11.975565  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:12.003072  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:29:12.003107  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:12.038364  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:12.038400  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:12.116412  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:12.116450  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:12.147738  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:12.147766  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:12.245018  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:12.245053  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:12.262566  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:12.262641  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:12.312750  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:12.312785  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:12.349963  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:12.349991  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:12.419426  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:12.411356    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.411861    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.413495    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.414181    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.415507    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:12.411356    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.411861    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.413495    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.414181    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.415507    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:12.419456  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:12.419472  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:12.444065  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:12.444093  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:12.511165  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:12.511200  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:15.042908  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:15.054321  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:15.054394  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:15.089860  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:15.089886  306747 cri.go:89] found id: ""
	I1017 19:29:15.089895  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:15.089951  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.093678  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:15.093788  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:15.121746  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:15.121771  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:15.121776  306747 cri.go:89] found id: ""
	I1017 19:29:15.121784  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:15.121839  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.125790  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.129470  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:15.129544  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:15.156564  306747 cri.go:89] found id: ""
	I1017 19:29:15.156591  306747 logs.go:282] 0 containers: []
	W1017 19:29:15.156600  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:15.156606  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:15.156665  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:15.189983  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:15.190010  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:15.190015  306747 cri.go:89] found id: ""
	I1017 19:29:15.190023  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:15.190113  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.194081  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.197983  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:15.198087  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:15.224673  306747 cri.go:89] found id: ""
	I1017 19:29:15.224701  306747 logs.go:282] 0 containers: []
	W1017 19:29:15.224710  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:15.224716  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:15.224776  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:15.250249  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:15.250272  306747 cri.go:89] found id: ""
	I1017 19:29:15.250280  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:15.250336  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.254014  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:15.254080  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:15.281235  306747 cri.go:89] found id: ""
	I1017 19:29:15.281313  306747 logs.go:282] 0 containers: []
	W1017 19:29:15.281337  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:15.281363  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:15.281395  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:15.385553  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:15.385599  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:15.411962  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:15.411991  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:15.455045  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:15.455073  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:15.527131  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:15.527170  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:15.554497  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:15.554527  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:15.587137  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:15.587164  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:15.604763  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:15.604794  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:15.679834  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:15.670121    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.670686    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.672157    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.672558    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.674247    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:15.670121    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.670686    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.672157    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.672558    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.674247    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:15.679857  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:15.679870  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:15.734902  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:15.734947  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:15.764734  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:15.764760  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:18.342635  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:18.353361  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:18.353435  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:18.380287  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:18.380311  306747 cri.go:89] found id: ""
	I1017 19:29:18.380319  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:18.380371  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.384298  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:18.384372  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:18.410566  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:18.410585  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:18.410590  306747 cri.go:89] found id: ""
	I1017 19:29:18.410597  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:18.410651  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.414392  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.417897  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:18.417969  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:18.447960  306747 cri.go:89] found id: ""
	I1017 19:29:18.447984  306747 logs.go:282] 0 containers: []
	W1017 19:29:18.447992  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:18.447999  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:18.448054  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:18.474020  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:18.474043  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:18.474049  306747 cri.go:89] found id: ""
	I1017 19:29:18.474059  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:18.474117  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.477723  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.481031  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:18.481111  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:18.508003  306747 cri.go:89] found id: ""
	I1017 19:29:18.508026  306747 logs.go:282] 0 containers: []
	W1017 19:29:18.508034  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:18.508040  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:18.508123  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:18.535988  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:18.536017  306747 cri.go:89] found id: ""
	I1017 19:29:18.536026  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:18.536114  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.539822  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:18.539919  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:18.565247  306747 cri.go:89] found id: ""
	I1017 19:29:18.565271  306747 logs.go:282] 0 containers: []
	W1017 19:29:18.565279  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:18.565287  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:18.565340  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:18.590409  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:18.590435  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:18.664546  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:18.664583  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:18.720073  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:18.720102  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:18.818026  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:18.818065  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:18.838304  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:18.838335  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:18.923376  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:18.914478    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.915271    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.916962    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.917666    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.919294    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:18.914478    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.915271    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.916962    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.917666    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.919294    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:18.923400  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:18.923413  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:18.958683  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:18.958723  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:18.993098  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:18.993125  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:19.020011  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:19.020054  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:19.072525  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:19.072558  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:21.648626  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:21.658854  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:21.658923  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:21.686357  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:21.686380  306747 cri.go:89] found id: ""
	I1017 19:29:21.686388  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:21.686440  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.690383  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:21.690455  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:21.716829  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:21.716849  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:21.716854  306747 cri.go:89] found id: ""
	I1017 19:29:21.716861  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:21.716918  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.720495  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.723948  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:21.724016  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:21.751438  306747 cri.go:89] found id: ""
	I1017 19:29:21.751462  306747 logs.go:282] 0 containers: []
	W1017 19:29:21.751471  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:21.751478  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:21.751540  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:21.777499  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:21.777526  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:21.777531  306747 cri.go:89] found id: ""
	I1017 19:29:21.777539  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:21.777597  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.781539  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.785454  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:21.785568  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:21.816183  306747 cri.go:89] found id: ""
	I1017 19:29:21.816248  306747 logs.go:282] 0 containers: []
	W1017 19:29:21.816270  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:21.816292  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:21.816377  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:21.854603  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:21.854670  306747 cri.go:89] found id: ""
	I1017 19:29:21.854695  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:21.854779  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.860948  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:21.861028  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:21.899847  306747 cri.go:89] found id: ""
	I1017 19:29:21.899871  306747 logs.go:282] 0 containers: []
	W1017 19:29:21.899879  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:21.899887  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:21.899899  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:21.958460  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:21.958497  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:22.040921  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:22.040958  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:22.070331  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:22.070410  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:22.149286  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:22.149326  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:22.180733  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:22.180761  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:22.199492  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:22.199531  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:22.272753  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:22.265010    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.265612    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.267150    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.267571    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.269051    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:22.265010    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.265612    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.267150    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.267571    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.269051    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:22.272779  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:22.272792  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:22.299733  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:22.299761  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:22.342105  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:22.342137  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:22.369741  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:22.369780  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:24.966101  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:24.976635  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:24.976715  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:25.022230  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:25.022256  306747 cri.go:89] found id: ""
	I1017 19:29:25.022267  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:25.022330  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.026476  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:25.026548  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:25.056264  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:25.056282  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:25.056287  306747 cri.go:89] found id: ""
	I1017 19:29:25.056295  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:25.056345  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.061372  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.064965  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:25.065034  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:25.104703  306747 cri.go:89] found id: ""
	I1017 19:29:25.104725  306747 logs.go:282] 0 containers: []
	W1017 19:29:25.104734  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:25.104739  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:25.104799  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:25.137104  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:25.137128  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:25.137134  306747 cri.go:89] found id: ""
	I1017 19:29:25.137142  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:25.137197  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.141057  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.144695  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:25.144771  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:25.171838  306747 cri.go:89] found id: ""
	I1017 19:29:25.171861  306747 logs.go:282] 0 containers: []
	W1017 19:29:25.171870  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:25.171876  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:25.171935  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:25.204227  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:25.204251  306747 cri.go:89] found id: ""
	I1017 19:29:25.204259  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:25.204312  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.208502  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:25.208632  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:25.234929  306747 cri.go:89] found id: ""
	I1017 19:29:25.235003  306747 logs.go:282] 0 containers: []
	W1017 19:29:25.235020  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:25.235030  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:25.235043  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:25.272163  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:25.272192  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:25.370863  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:25.370900  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:25.411966  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:25.412009  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:25.479240  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:25.479276  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:25.506577  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:25.506606  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:25.580671  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:25.580706  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:25.614033  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:25.614061  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:25.631893  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:25.631922  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:25.703391  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:25.694870    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.695646    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.697219    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.697740    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.699431    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:25.694870    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.695646    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.697219    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.697740    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.699431    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:25.703420  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:25.703449  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:25.729186  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:25.729213  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:28.281561  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:28.292670  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:28.292764  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:28.321689  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:28.321709  306747 cri.go:89] found id: ""
	I1017 19:29:28.321718  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:28.321791  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.325401  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:28.325491  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:28.353611  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:28.353636  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:28.353642  306747 cri.go:89] found id: ""
	I1017 19:29:28.353649  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:28.353708  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.357789  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.361132  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:28.361209  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:28.388364  306747 cri.go:89] found id: ""
	I1017 19:29:28.388392  306747 logs.go:282] 0 containers: []
	W1017 19:29:28.388401  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:28.388408  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:28.388471  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:28.414080  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:28.414105  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:28.414111  306747 cri.go:89] found id: ""
	I1017 19:29:28.414119  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:28.414176  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.417894  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.421494  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:28.421617  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:28.448583  306747 cri.go:89] found id: ""
	I1017 19:29:28.448611  306747 logs.go:282] 0 containers: []
	W1017 19:29:28.448620  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:28.448626  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:28.448683  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:28.481175  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:28.481198  306747 cri.go:89] found id: ""
	I1017 19:29:28.481208  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:28.481262  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.485099  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:28.485212  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:28.511543  306747 cri.go:89] found id: ""
	I1017 19:29:28.511569  306747 logs.go:282] 0 containers: []
	W1017 19:29:28.511577  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:28.511586  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:28.511617  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:28.606473  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:28.606511  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:28.626545  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:28.626577  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:28.697168  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:28.689422    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.690138    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.691704    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.692016    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.693514    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:28.689422    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.690138    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.691704    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.692016    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.693514    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:28.697191  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:28.697204  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:28.750046  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:28.750080  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:28.818139  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:28.818172  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:28.847832  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:28.847916  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:28.928453  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:28.928489  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:28.959160  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:28.959188  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:28.986346  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:28.986374  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:29.037329  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:29.037364  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:31.569631  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:31.580386  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:31.580488  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:31.606748  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:31.606776  306747 cri.go:89] found id: ""
	I1017 19:29:31.606786  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:31.606861  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.610709  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:31.610808  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:31.637721  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:31.637742  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:31.637747  306747 cri.go:89] found id: ""
	I1017 19:29:31.637754  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:31.637831  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.641550  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.644918  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:31.644994  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:31.671222  306747 cri.go:89] found id: ""
	I1017 19:29:31.671248  306747 logs.go:282] 0 containers: []
	W1017 19:29:31.671257  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:31.671263  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:31.671320  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:31.698318  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:31.698341  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:31.698347  306747 cri.go:89] found id: ""
	I1017 19:29:31.698354  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:31.698409  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.702033  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.705305  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:31.705406  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:31.733910  306747 cri.go:89] found id: ""
	I1017 19:29:31.733940  306747 logs.go:282] 0 containers: []
	W1017 19:29:31.733949  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:31.733956  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:31.734012  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:31.759712  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:31.759743  306747 cri.go:89] found id: ""
	I1017 19:29:31.759752  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:31.759802  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.763496  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:31.763571  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:31.789631  306747 cri.go:89] found id: ""
	I1017 19:29:31.789656  306747 logs.go:282] 0 containers: []
	W1017 19:29:31.789665  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:31.789684  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:31.789701  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:31.907913  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:31.907961  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:31.927231  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:31.927316  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:32.018355  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:32.018394  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:32.062156  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:32.062194  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:32.153927  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:32.153962  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:32.187982  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:32.188010  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:32.258773  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:32.251239    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.251763    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.253326    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.253710    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.255187    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:32.251239    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.251763    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.253326    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.253710    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.255187    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:32.258796  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:32.258835  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:32.290660  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:32.290689  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:32.368997  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:32.369029  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:32.400957  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:32.400988  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:34.933742  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:34.945067  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:34.945160  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:34.975919  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:34.975944  306747 cri.go:89] found id: ""
	I1017 19:29:34.975952  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:34.976011  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:34.979876  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:34.979963  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:35.007426  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:35.007451  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:35.007456  306747 cri.go:89] found id: ""
	I1017 19:29:35.007464  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:35.007526  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.013588  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.018178  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:35.018277  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:35.048204  306747 cri.go:89] found id: ""
	I1017 19:29:35.048239  306747 logs.go:282] 0 containers: []
	W1017 19:29:35.048248  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:35.048255  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:35.048315  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:35.083329  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:35.083352  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:35.083358  306747 cri.go:89] found id: ""
	I1017 19:29:35.083366  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:35.083430  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.088406  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.094362  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:35.094435  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:35.125078  306747 cri.go:89] found id: ""
	I1017 19:29:35.125160  306747 logs.go:282] 0 containers: []
	W1017 19:29:35.125185  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:35.125198  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:35.125277  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:35.153519  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:35.153543  306747 cri.go:89] found id: ""
	I1017 19:29:35.153552  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:35.153605  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.157388  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:35.157485  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:35.189018  306747 cri.go:89] found id: ""
	I1017 19:29:35.189086  306747 logs.go:282] 0 containers: []
	W1017 19:29:35.189113  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:35.189142  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:35.189185  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:35.290719  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:35.290763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:35.310771  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:35.310803  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:35.386443  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:35.376912    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.377784    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.379400    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.379730    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.381228    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:35.376912    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.377784    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.379400    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.379730    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.381228    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:35.386470  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:35.386484  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:35.442234  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:35.442274  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:35.480866  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:35.480896  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:35.549288  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:35.549326  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:35.576073  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:35.576102  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:35.611273  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:35.611308  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:35.639731  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:35.639763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:35.671118  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:35.671148  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
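	(Editor's note: the pass above is minikube's log-collection cycle — for each control-plane component it lists container IDs with `crictl ps -a --quiet --name=<component>` and then tails each container with `crictl logs --tail 400 <id>`. Below is a minimal standalone sketch of that enumeration for illustration only; it assumes `crictl` is installed locally and that sudo is available, and it is not minikube's actual implementation, which runs the same commands over SSH.)

	```go
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors the pattern in the log: ask crictl for all container
	// IDs whose name matches the given component (e.g. "kube-apiserver").
	func listContainerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	// tailLogs tails the last n lines of one container's logs, as in
	// "crictl logs --tail 400 <id>" above.
	func tailLogs(id string, n int) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
			ids, err := listContainerIDs(component)
			if err != nil {
				fmt.Println("listing", component, "failed:", err)
				continue
			}
			fmt.Printf("%d container(s) for %q: %v\n", len(ids), component, ids)
			for _, id := range ids {
				logs, _ := tailLogs(id, 400)
				fmt.Println(logs)
			}
		}
	}
	```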
	I1017 19:29:38.244668  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:38.257170  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:38.257244  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:38.283218  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:38.283238  306747 cri.go:89] found id: ""
	I1017 19:29:38.283247  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:38.283305  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.287299  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:38.287365  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:38.314528  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:38.314550  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:38.314555  306747 cri.go:89] found id: ""
	I1017 19:29:38.314563  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:38.314614  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.318298  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.321948  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:38.322042  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:38.349464  306747 cri.go:89] found id: ""
	I1017 19:29:38.349503  306747 logs.go:282] 0 containers: []
	W1017 19:29:38.349516  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:38.349538  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:38.349626  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:38.379503  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:38.379565  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:38.379583  306747 cri.go:89] found id: ""
	I1017 19:29:38.379608  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:38.379675  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.383360  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.387192  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:38.387298  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:38.421165  306747 cri.go:89] found id: ""
	I1017 19:29:38.421190  306747 logs.go:282] 0 containers: []
	W1017 19:29:38.421199  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:38.421205  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:38.421293  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:38.449443  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:38.449509  306747 cri.go:89] found id: ""
	I1017 19:29:38.449530  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:38.449608  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.453406  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:38.453530  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:38.480577  306747 cri.go:89] found id: ""
	I1017 19:29:38.480640  306747 logs.go:282] 0 containers: []
	W1017 19:29:38.480662  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:38.480687  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:38.480712  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:38.558339  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:38.558375  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:38.588992  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:38.589018  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:38.688443  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:38.688478  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:38.705940  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:38.706012  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:38.738810  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:38.738836  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:38.765665  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:38.765693  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:38.841021  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:38.831886    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.832670    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.834636    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.835450    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.837074    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:38.831886    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.832670    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.834636    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.835450    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.837074    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:38.841095  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:38.841115  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:38.870763  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:38.870791  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:38.943129  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:38.943162  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:38.984504  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:38.984583  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:41.577128  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:41.588152  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:41.588230  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:41.616214  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:41.616251  306747 cri.go:89] found id: ""
	I1017 19:29:41.616261  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:41.616333  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.620228  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:41.620301  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:41.647140  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:41.647166  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:41.647172  306747 cri.go:89] found id: ""
	I1017 19:29:41.647180  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:41.647241  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.650918  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.654626  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:41.654701  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:41.680974  306747 cri.go:89] found id: ""
	I1017 19:29:41.680999  306747 logs.go:282] 0 containers: []
	W1017 19:29:41.681008  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:41.681014  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:41.681071  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:41.707036  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:41.707071  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:41.707076  306747 cri.go:89] found id: ""
	I1017 19:29:41.707084  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:41.707137  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.710947  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.714920  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:41.715001  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:41.741927  306747 cri.go:89] found id: ""
	I1017 19:29:41.741952  306747 logs.go:282] 0 containers: []
	W1017 19:29:41.741962  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:41.741968  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:41.742026  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:41.766904  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:41.766928  306747 cri.go:89] found id: ""
	I1017 19:29:41.766936  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:41.766989  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.770640  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:41.770722  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:41.797979  306747 cri.go:89] found id: ""
	I1017 19:29:41.798007  306747 logs.go:282] 0 containers: []
	W1017 19:29:41.798017  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:41.798026  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:41.798038  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:41.815570  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:41.815602  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:41.872205  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:41.872246  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:41.910906  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:41.910942  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:41.996670  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:41.996709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:42.033766  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:42.033804  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:42.143006  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:42.143055  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:42.258670  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:42.246629    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.247190    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.249238    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.250318    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.251136    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:42.246629    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.247190    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.249238    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.250318    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.251136    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:42.258694  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:42.258709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:42.294390  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:42.294422  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:42.328168  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:42.328202  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:42.357875  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:42.357932  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:44.934951  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:44.945451  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:44.945522  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:44.979178  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:44.979201  306747 cri.go:89] found id: ""
	I1017 19:29:44.979209  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:44.979263  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:44.983046  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:44.983126  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:45.035414  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:45.035438  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:45.035443  306747 cri.go:89] found id: ""
	I1017 19:29:45.035451  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:45.035519  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.048433  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.053636  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:45.053716  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:45.120373  306747 cri.go:89] found id: ""
	I1017 19:29:45.120397  306747 logs.go:282] 0 containers: []
	W1017 19:29:45.120406  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:45.120414  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:45.120482  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:45.167585  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:45.167667  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:45.167692  306747 cri.go:89] found id: ""
	I1017 19:29:45.167719  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:45.167819  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.173369  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.178434  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:45.178531  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:45.220087  306747 cri.go:89] found id: ""
	I1017 19:29:45.220115  306747 logs.go:282] 0 containers: []
	W1017 19:29:45.220125  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:45.220132  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:45.220222  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:45.275433  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:45.275475  306747 cri.go:89] found id: ""
	I1017 19:29:45.275484  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:45.275559  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.281184  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:45.281323  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:45.323004  306747 cri.go:89] found id: ""
	I1017 19:29:45.323106  306747 logs.go:282] 0 containers: []
	W1017 19:29:45.323137  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:45.323188  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:45.323238  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:45.371491  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:45.371598  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:45.464170  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:45.455221    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.456745    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.457962    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.458630    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.460252    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:45.455221    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.456745    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.457962    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.458630    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.460252    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:45.464194  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:45.464206  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:45.499416  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:45.499445  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:45.536994  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:45.537028  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:45.615136  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:45.615172  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:45.720244  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:45.720281  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:45.778577  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:45.778610  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:45.859732  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:45.859813  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:45.896812  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:45.896889  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:45.929734  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:45.929763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
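	(Editor's note: each cycle above starts with `sudo pgrep -xnf kube-apiserver.*minikube.*`, i.e. minikube first checks whether a kube-apiserver process exists before re-gathering logs, which is why the same enumeration repeats every few seconds. A standalone sketch of that probe, assuming `pgrep` and sudo are available — a hypothetical illustration, not minikube code:)

	```go
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// apiserverRunning mirrors the "pgrep -xnf kube-apiserver.*minikube.*" probe in
	// the log. pgrep exits non-zero when nothing matches, which surfaces here as a
	// non-nil error, so the function reports false in that case.
	func apiserverRunning() bool {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		return err == nil && strings.TrimSpace(string(out)) != ""
	}

	func main() {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
		} else {
			fmt.Println("no kube-apiserver process yet; retrying later")
		}
	}
	```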
	I1017 19:29:48.461978  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:48.472688  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:48.472759  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:48.499995  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:48.500019  306747 cri.go:89] found id: ""
	I1017 19:29:48.500028  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:48.500084  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.504256  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:48.504330  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:48.533568  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:48.533627  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:48.533647  306747 cri.go:89] found id: ""
	I1017 19:29:48.533662  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:48.533722  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.538269  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.542307  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:48.542388  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:48.572286  306747 cri.go:89] found id: ""
	I1017 19:29:48.572355  306747 logs.go:282] 0 containers: []
	W1017 19:29:48.572379  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:48.572405  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:48.572499  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:48.599218  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:48.599246  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:48.599251  306747 cri.go:89] found id: ""
	I1017 19:29:48.599259  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:48.599310  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.603036  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.606361  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:48.606471  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:48.631930  306747 cri.go:89] found id: ""
	I1017 19:29:48.631966  306747 logs.go:282] 0 containers: []
	W1017 19:29:48.631975  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:48.631982  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:48.632052  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:48.658684  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:48.658711  306747 cri.go:89] found id: ""
	I1017 19:29:48.658720  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:48.658773  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.662512  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:48.662586  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:48.688997  306747 cri.go:89] found id: ""
	I1017 19:29:48.689022  306747 logs.go:282] 0 containers: []
	W1017 19:29:48.689031  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:48.689041  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:48.689052  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:48.789868  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:48.789919  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:48.860960  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:48.850451    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.851072    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.852664    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.852967    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.854822    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:48.850451    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.851072    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.852664    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.852967    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.854822    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:48.860984  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:48.861000  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:48.933293  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:48.933334  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:48.961662  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:48.961692  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:48.998503  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:48.998533  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:49.030219  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:49.030292  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:49.048915  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:49.048949  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:49.075217  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:49.075256  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:49.132824  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:49.132859  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:49.166233  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:49.166269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:51.747014  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:51.757581  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:51.757655  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:51.783413  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:51.783436  306747 cri.go:89] found id: ""
	I1017 19:29:51.783444  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:51.783499  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.787489  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:51.787553  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:51.815381  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:51.815404  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:51.815408  306747 cri.go:89] found id: ""
	I1017 19:29:51.815415  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:51.815467  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.819345  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.822754  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:51.822830  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:51.863882  306747 cri.go:89] found id: ""
	I1017 19:29:51.863922  306747 logs.go:282] 0 containers: []
	W1017 19:29:51.863931  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:51.863937  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:51.863997  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:51.896342  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:51.896414  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:51.896433  306747 cri.go:89] found id: ""
	I1017 19:29:51.896457  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:51.896574  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.900688  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.905025  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:51.905156  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:51.950302  306747 cri.go:89] found id: ""
	I1017 19:29:51.950325  306747 logs.go:282] 0 containers: []
	W1017 19:29:51.950333  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:51.950339  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:51.950408  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:51.984143  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:51.984164  306747 cri.go:89] found id: ""
	I1017 19:29:51.984172  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:51.984225  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.988312  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:51.988387  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:52.024692  306747 cri.go:89] found id: ""
	I1017 19:29:52.024720  306747 logs.go:282] 0 containers: []
	W1017 19:29:52.024729  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:52.024738  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:52.024750  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:52.043591  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:52.043708  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:52.083962  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:52.084045  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:52.156858  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:52.149368    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.149750    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.151218    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.151521    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.152949    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:52.149368    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.149750    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.151218    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.151521    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.152949    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:52.156879  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:52.156894  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:52.183367  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:52.183396  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:52.244364  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:52.244445  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:52.277850  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:52.277883  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:52.363433  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:52.363473  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:52.392573  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:52.392602  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:52.421470  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:52.421499  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:52.502975  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:52.503014  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
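	(Editor's note: every `describe nodes` attempt in this log fails the same way — the embedded kubectl cannot reach the API server at localhost:8443 (`connect: connection refused`), so minikube keeps looping through the same container enumeration and log gathering. Below is a minimal sketch of the kind of TCP reachability probe that would confirm that symptom, assuming the default API server address localhost:8443; it is a hypothetical illustration, not part of minikube or kubectl.)

	```go
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probe attempts a plain TCP connection to the endpoint the kubectl calls above
	// are failing against. A "connection refused" error here matches the log: no
	// process is accepting connections on that port.
	func probe(addr string) error {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		if err := probe("localhost:8443"); err != nil {
			fmt.Println("API server unreachable:", err)
			return
		}
		fmt.Println("API server port is accepting connections")
	}
	```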
	I1017 19:29:55.106386  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:55.118281  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:55.118357  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:55.147588  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:55.147612  306747 cri.go:89] found id: ""
	I1017 19:29:55.147625  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:55.147679  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.151460  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:55.151530  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:55.179417  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:55.179441  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:55.179447  306747 cri.go:89] found id: ""
	I1017 19:29:55.179455  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:55.179512  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.184062  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.187762  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:55.187876  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:55.214159  306747 cri.go:89] found id: ""
	I1017 19:29:55.214187  306747 logs.go:282] 0 containers: []
	W1017 19:29:55.214196  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:55.214203  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:55.214268  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:55.244963  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:55.244987  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:55.244992  306747 cri.go:89] found id: ""
	I1017 19:29:55.244999  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:55.245052  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.250157  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.256061  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:55.256151  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:55.287091  306747 cri.go:89] found id: ""
	I1017 19:29:55.287114  306747 logs.go:282] 0 containers: []
	W1017 19:29:55.287122  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:55.287128  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:55.287192  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:55.316175  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:55.316245  306747 cri.go:89] found id: ""
	I1017 19:29:55.316268  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:55.316359  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.321292  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:55.321374  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:55.348125  306747 cri.go:89] found id: ""
	I1017 19:29:55.348151  306747 logs.go:282] 0 containers: []
	W1017 19:29:55.348160  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:55.348169  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:55.348181  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:55.380783  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:55.380812  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:55.414351  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:55.414386  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:55.484774  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:55.475182    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.476192    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.478010    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.478543    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.480183    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:55.475182    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.476192    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.478010    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.478543    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.480183    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:55.484796  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:55.484809  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:55.556984  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:55.557018  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:55.625177  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:55.625251  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:55.655370  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:55.655398  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:55.680829  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:55.680860  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:55.763300  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:55.763331  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:55.803920  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:55.803954  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:55.900738  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:55.900773  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:58.422801  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:58.433443  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:58.433516  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:58.464116  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:58.464136  306747 cri.go:89] found id: ""
	I1017 19:29:58.464144  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:58.464212  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.468047  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:58.468169  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:58.494945  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:58.494979  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:58.494985  306747 cri.go:89] found id: ""
	I1017 19:29:58.494993  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:58.495058  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.498896  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.502320  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:58.502386  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:58.531527  306747 cri.go:89] found id: ""
	I1017 19:29:58.531550  306747 logs.go:282] 0 containers: []
	W1017 19:29:58.531558  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:58.531564  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:58.531623  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:58.558316  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:58.558337  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:58.558342  306747 cri.go:89] found id: ""
	I1017 19:29:58.558350  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:58.558403  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.562311  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.565856  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:58.565960  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:58.591130  306747 cri.go:89] found id: ""
	I1017 19:29:58.591156  306747 logs.go:282] 0 containers: []
	W1017 19:29:58.591164  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:58.591173  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:58.591229  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:58.618142  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:58.618221  306747 cri.go:89] found id: ""
	I1017 19:29:58.618237  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:58.618297  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.621817  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:58.621888  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:58.651258  306747 cri.go:89] found id: ""
	I1017 19:29:58.651284  306747 logs.go:282] 0 containers: []
	W1017 19:29:58.651293  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:58.651302  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:58.651315  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:58.720909  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:58.720942  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:58.748703  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:58.748729  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:58.776433  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:58.776463  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:58.851007  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:58.851041  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:58.884351  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:58.884382  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:58.957941  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:58.949361    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.950154    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.951742    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.952330    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.954025    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:58.949361    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.950154    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.951742    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.952330    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.954025    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:58.957961  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:58.957974  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:58.987459  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:58.987531  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:59.026978  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:59.027008  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:59.128822  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:59.128858  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:59.146047  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:59.146079  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:01.705070  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:01.718647  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:01.718748  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:01.753347  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:01.753387  306747 cri.go:89] found id: ""
	I1017 19:30:01.753395  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:01.753457  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.757741  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:01.757850  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:01.786783  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:01.786861  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:01.786873  306747 cri.go:89] found id: ""
	I1017 19:30:01.786882  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:01.787029  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.791549  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.796677  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:01.796752  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:01.826434  306747 cri.go:89] found id: ""
	I1017 19:30:01.826462  306747 logs.go:282] 0 containers: []
	W1017 19:30:01.826472  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:01.826478  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:01.826543  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:01.863544  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:01.863569  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:01.863574  306747 cri.go:89] found id: ""
	I1017 19:30:01.863582  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:01.863639  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.867992  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.872125  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:01.872206  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:01.908249  306747 cri.go:89] found id: ""
	I1017 19:30:01.908276  306747 logs.go:282] 0 containers: []
	W1017 19:30:01.908285  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:01.908292  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:01.908354  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:01.936971  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:01.937001  306747 cri.go:89] found id: ""
	I1017 19:30:01.937010  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:01.937105  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.941357  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:01.941426  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:01.982542  306747 cri.go:89] found id: ""
	I1017 19:30:01.982569  306747 logs.go:282] 0 containers: []
	W1017 19:30:01.982578  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:01.982593  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:01.982606  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:02.018942  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:02.018970  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:02.099513  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:02.099556  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:02.137502  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:02.137532  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:02.185697  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:02.185738  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:02.288795  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:02.288835  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:02.336210  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:02.336248  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:02.422878  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:02.422917  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:02.453635  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:02.453662  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:02.540123  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:02.540164  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:02.558457  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:02.558491  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:02.629161  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:02.619096   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.619981   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.621652   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.622279   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.624619   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:02.619096   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.619981   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.621652   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.622279   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.624619   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:05.130448  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:05.144120  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:05.144214  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:05.175291  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:05.175324  306747 cri.go:89] found id: ""
	I1017 19:30:05.175334  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:05.175394  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.179428  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:05.179514  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:05.212486  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:05.212511  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:05.212541  306747 cri.go:89] found id: ""
	I1017 19:30:05.212550  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:05.212606  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.216463  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.220220  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:05.220295  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:05.249597  306747 cri.go:89] found id: ""
	I1017 19:30:05.249624  306747 logs.go:282] 0 containers: []
	W1017 19:30:05.249633  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:05.249640  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:05.249706  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:05.276856  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:05.276878  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:05.276883  306747 cri.go:89] found id: ""
	I1017 19:30:05.276890  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:05.276945  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.280586  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.284132  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:05.284196  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:05.312051  306747 cri.go:89] found id: ""
	I1017 19:30:05.312081  306747 logs.go:282] 0 containers: []
	W1017 19:30:05.312090  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:05.312096  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:05.312154  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:05.339324  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:05.339345  306747 cri.go:89] found id: ""
	I1017 19:30:05.339353  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:05.339406  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.343274  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:05.343351  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:05.371042  306747 cri.go:89] found id: ""
	I1017 19:30:05.371067  306747 logs.go:282] 0 containers: []
	W1017 19:30:05.371076  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:05.371086  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:05.371103  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:05.395923  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:05.395957  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:05.453746  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:05.453785  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:05.495400  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:05.495436  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:05.522354  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:05.522384  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:05.603168  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:05.603203  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:05.635130  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:05.635158  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:05.730159  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:05.730196  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:05.805436  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:05.797321   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.798191   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.799878   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.800180   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.801717   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:05.797321   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.798191   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.799878   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.800180   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.801717   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:05.805458  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:05.805471  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:05.831415  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:05.831453  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:05.915270  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:05.915309  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:08.445553  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:08.457157  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:08.457224  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:08.489306  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:08.489335  306747 cri.go:89] found id: ""
	I1017 19:30:08.489344  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:08.489399  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.493424  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:08.493497  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:08.523021  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:08.523056  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:08.523061  306747 cri.go:89] found id: ""
	I1017 19:30:08.523069  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:08.523133  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.527165  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.530929  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:08.531043  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:08.560240  306747 cri.go:89] found id: ""
	I1017 19:30:08.560266  306747 logs.go:282] 0 containers: []
	W1017 19:30:08.560275  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:08.560282  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:08.560340  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:08.587950  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:08.587974  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:08.587979  306747 cri.go:89] found id: ""
	I1017 19:30:08.587987  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:08.588048  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.591797  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.595627  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:08.595710  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:08.622023  306747 cri.go:89] found id: ""
	I1017 19:30:08.622048  306747 logs.go:282] 0 containers: []
	W1017 19:30:08.622057  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:08.622064  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:08.622123  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:08.652098  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:08.652194  306747 cri.go:89] found id: ""
	I1017 19:30:08.652232  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:08.652399  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.657095  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:08.657180  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:08.687380  306747 cri.go:89] found id: ""
	I1017 19:30:08.687404  306747 logs.go:282] 0 containers: []
	W1017 19:30:08.687412  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:08.687421  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:08.687433  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:08.785046  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:08.785084  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:08.815287  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:08.815318  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:08.880972  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:08.881008  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:08.919918  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:08.919947  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:08.994592  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:08.994632  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:09.029806  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:09.029833  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:09.059196  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:09.059224  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:09.077625  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:09.077658  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:09.155722  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:09.147557   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.148286   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.149973   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.150565   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.152238   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:09.147557   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.148286   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.149973   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.150565   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.152238   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:09.155746  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:09.155759  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:09.230856  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:09.230895  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:11.763218  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:11.774210  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:11.774310  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:11.807759  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:11.807778  306747 cri.go:89] found id: ""
	I1017 19:30:11.807786  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:11.807840  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.812129  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:11.812202  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:11.840430  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:11.840451  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:11.840459  306747 cri.go:89] found id: ""
	I1017 19:30:11.840467  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:11.840562  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.844491  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.848972  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:11.849065  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:11.876962  306747 cri.go:89] found id: ""
	I1017 19:30:11.876986  306747 logs.go:282] 0 containers: []
	W1017 19:30:11.876994  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:11.877000  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:11.877060  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:11.907338  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:11.907402  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:11.907421  306747 cri.go:89] found id: ""
	I1017 19:30:11.907446  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:11.907534  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.911700  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.915708  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:11.915823  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:11.945931  306747 cri.go:89] found id: ""
	I1017 19:30:11.945968  306747 logs.go:282] 0 containers: []
	W1017 19:30:11.945976  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:11.945983  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:11.946041  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:11.973489  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:11.973509  306747 cri.go:89] found id: ""
	I1017 19:30:11.973517  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:11.973582  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.979325  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:11.979401  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:12.006387  306747 cri.go:89] found id: ""
	I1017 19:30:12.006415  306747 logs.go:282] 0 containers: []
	W1017 19:30:12.006425  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:12.006437  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:12.006452  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:12.112142  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:12.112180  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:12.130633  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:12.130662  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:12.219234  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:12.204079   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.204586   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.208545   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.212324   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.214784   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:12.204079   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.204586   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.208545   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.212324   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.214784   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:12.219259  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:12.219274  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:12.248889  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:12.248918  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:12.284961  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:12.284995  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:12.360893  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:12.360930  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:12.394406  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:12.394433  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:12.420215  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:12.420245  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:12.477947  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:12.477980  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:12.559952  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:12.559989  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:15.098061  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:15.110601  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:15.110673  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:15.142831  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:15.142854  306747 cri.go:89] found id: ""
	I1017 19:30:15.142863  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:15.142922  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.147216  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:15.147336  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:15.177462  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:15.177487  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:15.177492  306747 cri.go:89] found id: ""
	I1017 19:30:15.177500  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:15.177556  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.182001  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.186668  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:15.186752  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:15.218350  306747 cri.go:89] found id: ""
	I1017 19:30:15.218375  306747 logs.go:282] 0 containers: []
	W1017 19:30:15.218383  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:15.218389  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:15.218449  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:15.247656  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:15.247730  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:15.247750  306747 cri.go:89] found id: ""
	I1017 19:30:15.247774  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:15.247847  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.251499  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.254966  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:15.255039  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:15.282034  306747 cri.go:89] found id: ""
	I1017 19:30:15.282056  306747 logs.go:282] 0 containers: []
	W1017 19:30:15.282065  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:15.282071  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:15.282131  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:15.313582  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:15.313643  306747 cri.go:89] found id: ""
	I1017 19:30:15.313665  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:15.313739  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.317325  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:15.317407  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:15.343894  306747 cri.go:89] found id: ""
	I1017 19:30:15.343921  306747 logs.go:282] 0 containers: []
	W1017 19:30:15.343937  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:15.343947  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:15.343967  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:15.416772  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:15.408215   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.409020   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.410494   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.410798   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.412827   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:15.408215   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.409020   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.410494   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.410798   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.412827   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:15.416794  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:15.416807  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:15.455991  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:15.456060  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:15.533107  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:15.533144  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:15.605424  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:15.605464  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:15.633544  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:15.633572  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:15.710509  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:15.710545  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:15.744271  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:15.744352  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:15.844584  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:15.844621  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:15.865714  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:15.865745  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:15.910911  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:15.910945  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:18.440664  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:18.451576  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:18.451643  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:18.480927  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:18.480948  306747 cri.go:89] found id: ""
	I1017 19:30:18.480956  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:18.481010  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.484797  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:18.484886  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:18.512958  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:18.513034  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:18.513045  306747 cri.go:89] found id: ""
	I1017 19:30:18.513053  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:18.513106  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.516855  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.520298  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:18.520369  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:18.546427  306747 cri.go:89] found id: ""
	I1017 19:30:18.546453  306747 logs.go:282] 0 containers: []
	W1017 19:30:18.546462  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:18.546468  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:18.546532  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:18.573945  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:18.574007  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:18.574021  306747 cri.go:89] found id: ""
	I1017 19:30:18.574030  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:18.574094  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.577681  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.581276  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:18.581357  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:18.607914  306747 cri.go:89] found id: ""
	I1017 19:30:18.607941  306747 logs.go:282] 0 containers: []
	W1017 19:30:18.607950  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:18.607956  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:18.608013  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:18.634762  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:18.634781  306747 cri.go:89] found id: ""
	I1017 19:30:18.634789  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:18.634842  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.638638  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:18.638754  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:18.666586  306747 cri.go:89] found id: ""
	I1017 19:30:18.666610  306747 logs.go:282] 0 containers: []
	W1017 19:30:18.666618  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:18.666627  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:18.666639  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:18.685607  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:18.685637  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:18.740058  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:18.740088  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:18.816374  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:18.816410  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:18.842654  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:18.842686  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:18.921888  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:18.913390   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.913958   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.915701   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.916258   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.918025   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:18.913390   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.913958   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.915701   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.916258   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.918025   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:18.921914  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:18.921930  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:18.948267  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:18.948298  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:19.003855  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:19.003894  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:19.033396  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:19.033424  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:19.128308  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:19.128353  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:19.162140  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:19.162166  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:21.764178  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:21.775522  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:21.775596  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:21.803342  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:21.803367  306747 cri.go:89] found id: ""
	I1017 19:30:21.803377  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:21.803442  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.807522  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:21.807598  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:21.836696  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:21.836720  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:21.836726  306747 cri.go:89] found id: ""
	I1017 19:30:21.836734  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:21.836789  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.840752  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.844455  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:21.844557  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:21.872104  306747 cri.go:89] found id: ""
	I1017 19:30:21.872131  306747 logs.go:282] 0 containers: []
	W1017 19:30:21.872140  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:21.872147  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:21.872210  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:21.908413  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:21.908439  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:21.908448  306747 cri.go:89] found id: ""
	I1017 19:30:21.908455  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:21.908513  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.912640  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.916402  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:21.916476  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:21.950380  306747 cri.go:89] found id: ""
	I1017 19:30:21.950466  306747 logs.go:282] 0 containers: []
	W1017 19:30:21.950498  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:21.950517  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:21.950628  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:21.983152  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:21.983177  306747 cri.go:89] found id: ""
	I1017 19:30:21.983187  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:21.983243  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.986962  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:21.987037  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:22.019909  306747 cri.go:89] found id: ""
	I1017 19:30:22.019935  306747 logs.go:282] 0 containers: []
	W1017 19:30:22.019944  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:22.019953  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:22.019996  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:22.069135  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:22.069175  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:22.103886  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:22.103916  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:22.133109  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:22.133136  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:22.215579  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:22.215617  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:22.297981  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:22.289181   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.289836   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.291072   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.291590   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.293032   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:22.289181   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.289836   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.291072   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.291590   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.293032   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:22.298003  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:22.298017  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:22.373102  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:22.373140  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:22.406083  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:22.406110  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:22.506621  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:22.506659  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:22.526268  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:22.526299  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:22.557755  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:22.557784  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:25.116647  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:25.128310  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:25.128412  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:25.158258  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:25.158281  306747 cri.go:89] found id: ""
	I1017 19:30:25.158293  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:25.158358  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.162693  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:25.162773  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:25.197276  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:25.197301  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:25.197307  306747 cri.go:89] found id: ""
	I1017 19:30:25.197315  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:25.197407  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.201342  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.205350  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:25.205422  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:25.233590  306747 cri.go:89] found id: ""
	I1017 19:30:25.233617  306747 logs.go:282] 0 containers: []
	W1017 19:30:25.233627  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:25.233634  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:25.233693  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:25.260459  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:25.260486  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:25.260492  306747 cri.go:89] found id: ""
	I1017 19:30:25.260500  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:25.260582  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.266116  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.269609  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:25.269709  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:25.299945  306747 cri.go:89] found id: ""
	I1017 19:30:25.299970  306747 logs.go:282] 0 containers: []
	W1017 19:30:25.299979  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:25.299986  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:25.300062  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:25.327588  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:25.327611  306747 cri.go:89] found id: ""
	I1017 19:30:25.327619  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:25.327695  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.331614  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:25.331714  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:25.360945  306747 cri.go:89] found id: ""
	I1017 19:30:25.360969  306747 logs.go:282] 0 containers: []
	W1017 19:30:25.360978  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:25.360987  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:25.361018  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:25.419332  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:25.419371  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:25.455422  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:25.455454  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:25.533420  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:25.533454  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:25.561277  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:25.561303  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:25.589003  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:25.589032  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:25.667191  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:25.667225  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:25.697081  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:25.697108  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:25.796723  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:25.796756  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:25.817825  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:25.817854  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:25.895602  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:25.887039   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.887933   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.889709   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.890373   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.891870   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:25.887039   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.887933   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.889709   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.890373   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.891870   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:25.895626  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:25.895639  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:28.421545  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:28.432472  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:28.432573  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:28.461368  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:28.461391  306747 cri.go:89] found id: ""
	I1017 19:30:28.461400  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:28.461454  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.466145  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:28.466221  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:28.496790  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:28.496814  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:28.496822  306747 cri.go:89] found id: ""
	I1017 19:30:28.496830  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:28.496886  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.500588  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.504150  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:28.504250  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:28.530114  306747 cri.go:89] found id: ""
	I1017 19:30:28.530141  306747 logs.go:282] 0 containers: []
	W1017 19:30:28.530150  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:28.530157  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:28.530257  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:28.560630  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:28.560660  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:28.560675  306747 cri.go:89] found id: ""
	I1017 19:30:28.560684  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:28.560737  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.564422  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.568093  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:28.568165  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:28.598927  306747 cri.go:89] found id: ""
	I1017 19:30:28.598954  306747 logs.go:282] 0 containers: []
	W1017 19:30:28.598963  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:28.598969  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:28.599075  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:28.625977  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:28.626001  306747 cri.go:89] found id: ""
	I1017 19:30:28.626010  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:28.626090  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.629847  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:28.629929  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:28.656469  306747 cri.go:89] found id: ""
	I1017 19:30:28.656494  306747 logs.go:282] 0 containers: []
	W1017 19:30:28.656503  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:28.656513  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:28.656548  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:28.758826  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:28.758863  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:28.778387  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:28.778416  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:28.845382  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:28.837571   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.838156   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.839753   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.840320   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.841429   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:28.837571   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.838156   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.839753   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.840320   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.841429   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:28.845407  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:28.845420  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:28.889092  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:28.889167  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:28.970950  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:28.970986  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:29.003996  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:29.004028  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:29.064888  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:29.064926  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:29.105700  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:29.105729  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:29.141040  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:29.141066  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:29.224674  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:29.224710  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:31.757505  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:31.767848  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:31.767914  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:31.800059  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:31.800082  306747 cri.go:89] found id: ""
	I1017 19:30:31.800093  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:31.800147  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.803723  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:31.803795  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:31.830502  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:31.830525  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:31.830530  306747 cri.go:89] found id: ""
	I1017 19:30:31.830546  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:31.830600  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.834866  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.838218  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:31.838293  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:31.866917  306747 cri.go:89] found id: ""
	I1017 19:30:31.866944  306747 logs.go:282] 0 containers: []
	W1017 19:30:31.866953  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:31.866960  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:31.867015  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:31.898652  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:31.898673  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:31.898679  306747 cri.go:89] found id: ""
	I1017 19:30:31.898692  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:31.898745  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.902404  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.905916  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:31.906005  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:31.936988  306747 cri.go:89] found id: ""
	I1017 19:30:31.937055  306747 logs.go:282] 0 containers: []
	W1017 19:30:31.937080  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:31.937103  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:31.937192  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:31.965478  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:31.965506  306747 cri.go:89] found id: ""
	I1017 19:30:31.965515  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:31.965570  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.969541  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:31.969611  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:31.997913  306747 cri.go:89] found id: ""
	I1017 19:30:31.997936  306747 logs.go:282] 0 containers: []
	W1017 19:30:31.997945  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:31.997954  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:31.997967  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:32.075635  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:32.076176  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:32.124512  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:32.124607  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:32.203895  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:32.203930  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:32.237712  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:32.237745  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:32.265784  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:32.265812  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:32.296288  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:32.296316  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:32.413833  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:32.413869  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:32.431287  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:32.431316  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:32.496198  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:32.487969   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.488616   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.490480   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.490935   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.492578   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:32.487969   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.488616   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.490480   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.490935   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.492578   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:32.496222  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:32.496238  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:32.522527  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:32.522556  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:35.098806  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:35.114025  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:35.114098  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:35.150192  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:35.150215  306747 cri.go:89] found id: ""
	I1017 19:30:35.150224  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:35.150291  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.154431  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:35.154528  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:35.187248  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:35.187274  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:35.187280  306747 cri.go:89] found id: ""
	I1017 19:30:35.187288  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:35.187342  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.190988  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.194467  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:35.194544  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:35.226183  306747 cri.go:89] found id: ""
	I1017 19:30:35.226209  306747 logs.go:282] 0 containers: []
	W1017 19:30:35.226228  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:35.226277  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:35.226345  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:35.254492  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:35.254514  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:35.254532  306747 cri.go:89] found id: ""
	I1017 19:30:35.254542  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:35.254600  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.258515  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.262160  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:35.262245  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:35.290479  306747 cri.go:89] found id: ""
	I1017 19:30:35.290556  306747 logs.go:282] 0 containers: []
	W1017 19:30:35.290573  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:35.290581  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:35.290647  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:35.320673  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:35.320696  306747 cri.go:89] found id: ""
	I1017 19:30:35.320705  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:35.320760  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.324577  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:35.324650  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:35.351615  306747 cri.go:89] found id: ""
	I1017 19:30:35.351643  306747 logs.go:282] 0 containers: []
	W1017 19:30:35.351652  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:35.351662  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:35.351674  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:35.426069  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:35.414413   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.418263   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.419343   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.419972   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.421885   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:35.414413   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.418263   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.419343   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.419972   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.421885   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:35.426092  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:35.426105  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:35.458415  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:35.458445  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:35.532727  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:35.532763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:35.570789  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:35.570821  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:35.654656  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:35.654691  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:35.682337  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:35.682368  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:35.783217  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:35.783263  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:35.809044  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:35.809075  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:35.836181  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:35.836213  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:35.922975  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:35.923013  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:38.460477  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:38.471359  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:38.471462  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:38.500899  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:38.500923  306747 cri.go:89] found id: ""
	I1017 19:30:38.500932  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:38.501005  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.505166  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:38.505244  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:38.531743  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:38.531766  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:38.531771  306747 cri.go:89] found id: ""
	I1017 19:30:38.531779  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:38.531842  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.535645  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.539501  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:38.539580  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:38.568890  306747 cri.go:89] found id: ""
	I1017 19:30:38.568915  306747 logs.go:282] 0 containers: []
	W1017 19:30:38.568923  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:38.568929  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:38.568989  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:38.594452  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:38.594476  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:38.594482  306747 cri.go:89] found id: ""
	I1017 19:30:38.594490  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:38.594544  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.598456  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.606409  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:38.606483  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:38.632993  306747 cri.go:89] found id: ""
	I1017 19:30:38.633015  306747 logs.go:282] 0 containers: []
	W1017 19:30:38.633024  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:38.633030  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:38.633091  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:38.659776  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:38.659800  306747 cri.go:89] found id: ""
	I1017 19:30:38.659809  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:38.659861  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.663404  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:38.663507  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:38.688978  306747 cri.go:89] found id: ""
	I1017 19:30:38.689003  306747 logs.go:282] 0 containers: []
	W1017 19:30:38.689012  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:38.689021  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:38.689033  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:38.722471  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:38.722497  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:38.800538  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:38.800575  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:38.832423  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:38.832451  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:38.939609  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:38.939648  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:38.959665  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:38.959701  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:39.039314  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:39.030321   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.030924   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.032747   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.033627   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.034935   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:39.030321   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.030924   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.032747   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.033627   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.034935   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:39.039340  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:39.039355  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:39.113637  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:39.113709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:39.148504  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:39.148662  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:39.223019  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:39.223056  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:39.253605  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:39.253635  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:41.780640  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:41.791876  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:41.791949  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:41.819510  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:41.819583  306747 cri.go:89] found id: ""
	I1017 19:30:41.819606  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:41.819691  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.824390  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:41.824462  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:41.856605  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:41.856636  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:41.856642  306747 cri.go:89] found id: ""
	I1017 19:30:41.856649  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:41.856715  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.864466  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.868588  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:41.868666  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:41.903466  306747 cri.go:89] found id: ""
	I1017 19:30:41.903498  306747 logs.go:282] 0 containers: []
	W1017 19:30:41.903507  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:41.903514  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:41.903571  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:41.930657  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:41.930682  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:41.930687  306747 cri.go:89] found id: ""
	I1017 19:30:41.930694  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:41.930749  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.934754  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.938781  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:41.938871  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:41.968280  306747 cri.go:89] found id: ""
	I1017 19:30:41.968306  306747 logs.go:282] 0 containers: []
	W1017 19:30:41.968315  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:41.968322  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:41.968402  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:41.995850  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:41.995931  306747 cri.go:89] found id: ""
	I1017 19:30:41.995955  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:41.996030  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.999630  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:41.999700  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:42.044891  306747 cri.go:89] found id: ""
	I1017 19:30:42.044926  306747 logs.go:282] 0 containers: []
	W1017 19:30:42.044935  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:42.044952  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:42.044971  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:42.174128  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:42.174267  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:42.224381  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:42.224413  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:42.333478  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:42.333518  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:42.353368  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:42.353403  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:42.391604  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:42.391635  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:42.426317  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:42.426347  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:42.503367  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:42.494794   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.495471   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.497096   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.497695   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.499206   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:42.494794   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.495471   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.497096   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.497695   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.499206   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:42.503388  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:42.503401  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:42.560324  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:42.560359  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:42.632932  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:42.632968  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:42.665758  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:42.665844  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:45.196869  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:45.213931  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:45.214024  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:45.259283  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:45.259312  306747 cri.go:89] found id: ""
	I1017 19:30:45.259321  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:45.259390  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.265805  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:45.265913  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:45.316071  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:45.316098  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:45.316103  306747 cri.go:89] found id: ""
	I1017 19:30:45.316112  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:45.316178  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.329246  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.342518  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:45.342722  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:45.403649  306747 cri.go:89] found id: ""
	I1017 19:30:45.403681  306747 logs.go:282] 0 containers: []
	W1017 19:30:45.403691  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:45.403700  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:45.403771  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:45.436373  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:45.436398  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:45.436404  306747 cri.go:89] found id: ""
	I1017 19:30:45.436412  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:45.436470  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.442171  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.446282  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:45.446378  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:45.480185  306747 cri.go:89] found id: ""
	I1017 19:30:45.480211  306747 logs.go:282] 0 containers: []
	W1017 19:30:45.480269  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:45.480281  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:45.480348  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:45.519821  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:45.519845  306747 cri.go:89] found id: ""
	I1017 19:30:45.519853  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:45.519916  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.523961  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:45.524044  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:45.553268  306747 cri.go:89] found id: ""
	I1017 19:30:45.553295  306747 logs.go:282] 0 containers: []
	W1017 19:30:45.553336  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:45.553353  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:45.553376  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:45.581168  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:45.581199  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:45.659459  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:45.659495  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:45.698325  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:45.698356  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:45.730552  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:45.730578  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:45.761205  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:45.761233  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:45.859241  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:45.859345  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:45.879219  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:45.879249  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:45.956579  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:45.956613  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:46.038168  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:46.038207  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:46.088885  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:46.088920  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:46.156435  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:46.147068   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.148033   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.149640   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.150155   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.151669   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:46.147068   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.148033   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.149640   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.150155   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.151669   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:48.657371  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:48.668345  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:48.668414  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:48.699974  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:48.699994  306747 cri.go:89] found id: ""
	I1017 19:30:48.700002  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:48.700055  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.703706  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:48.703773  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:48.729231  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:48.729255  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:48.729260  306747 cri.go:89] found id: ""
	I1017 19:30:48.729267  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:48.729347  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.733057  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.736560  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:48.736650  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:48.769891  306747 cri.go:89] found id: ""
	I1017 19:30:48.769917  306747 logs.go:282] 0 containers: []
	W1017 19:30:48.769925  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:48.769932  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:48.769988  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:48.796614  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:48.796633  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:48.796638  306747 cri.go:89] found id: ""
	I1017 19:30:48.796645  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:48.796697  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.800347  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.803641  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:48.803707  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:48.829352  306747 cri.go:89] found id: ""
	I1017 19:30:48.829375  306747 logs.go:282] 0 containers: []
	W1017 19:30:48.829384  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:48.829390  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:48.829448  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:48.863517  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:48.863542  306747 cri.go:89] found id: ""
	I1017 19:30:48.863551  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:48.863603  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.867339  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:48.867411  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:48.896584  306747 cri.go:89] found id: ""
	I1017 19:30:48.896609  306747 logs.go:282] 0 containers: []
	W1017 19:30:48.896618  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:48.896626  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:48.896639  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:48.990111  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:48.990146  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:49.015233  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:49.015265  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:49.040589  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:49.040623  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:49.100203  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:49.100237  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:49.135876  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:49.135909  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:49.168685  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:49.168756  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:49.211941  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:49.212009  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:49.278129  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:49.270279   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.271015   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.272492   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.272926   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.274542   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:49.270279   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.271015   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.272492   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.272926   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.274542   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:49.278151  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:49.278166  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:49.355582  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:49.355620  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:49.385861  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:49.385888  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:51.961962  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:51.973739  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:51.973839  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:52.007060  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:52.007089  306747 cri.go:89] found id: ""
	I1017 19:30:52.007098  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:52.007173  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.011950  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:52.012025  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:52.043424  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:52.043445  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:52.043450  306747 cri.go:89] found id: ""
	I1017 19:30:52.043458  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:52.043515  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.048102  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.051750  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:52.051836  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:52.091285  306747 cri.go:89] found id: ""
	I1017 19:30:52.091362  306747 logs.go:282] 0 containers: []
	W1017 19:30:52.091384  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:52.091412  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:52.091533  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:52.120853  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:52.120928  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:52.120947  306747 cri.go:89] found id: ""
	I1017 19:30:52.120962  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:52.121037  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.125047  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.128913  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:52.129029  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:52.155112  306747 cri.go:89] found id: ""
	I1017 19:30:52.155138  306747 logs.go:282] 0 containers: []
	W1017 19:30:52.155147  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:52.155153  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:52.155217  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:52.181654  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:52.181678  306747 cri.go:89] found id: ""
	I1017 19:30:52.181686  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:52.181738  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.185468  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:52.185538  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:52.210532  306747 cri.go:89] found id: ""
	I1017 19:30:52.210558  306747 logs.go:282] 0 containers: []
	W1017 19:30:52.210567  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:52.210577  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:52.210591  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:52.283758  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:52.283793  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:52.321133  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:52.321172  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:52.349409  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:52.349440  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:52.454035  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:52.454072  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:52.474228  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:52.474336  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:52.549970  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:52.541938   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.542794   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.543926   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.544704   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.546272   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1017 19:30:52.550045  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:52.550073  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:52.637174  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:52.637221  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:52.668341  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:52.668418  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:52.761051  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:52.761091  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:52.792065  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:52.792160  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
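	The gathering cycle above keeps failing at the "describe nodes" step with connection refused on localhost:8443, even though crictl still lists the control-plane containers. A minimal manual check along these lines (a sketch only, assuming shell access to the node, e.g. via minikube ssh; the port is taken from the log, and /healthz is the standard apiserver health endpoint rather than something this report shows) would tell whether kube-apiserver is running but not yet serving:

	# is the apiserver container running (not just present in `crictl ps -a`)?
	sudo crictl ps --name kube-apiserver
	# is anything listening on the port kubectl keeps dialing?
	sudo ss -ltnp | grep 8443
	# does the apiserver answer its health endpoint yet?
	curl -k --max-time 5 https://localhost:8443/healthz; echo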
	I1017 19:30:55.319606  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:55.330935  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:55.331008  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:55.358717  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:55.358739  306747 cri.go:89] found id: ""
	I1017 19:30:55.358747  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:55.358802  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.362654  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:55.362769  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:55.397277  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:55.397301  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:55.397306  306747 cri.go:89] found id: ""
	I1017 19:30:55.397314  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:55.397368  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.401240  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.405131  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:55.405244  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:55.432480  306747 cri.go:89] found id: ""
	I1017 19:30:55.432602  306747 logs.go:282] 0 containers: []
	W1017 19:30:55.432627  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:55.432666  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:55.432750  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:55.465240  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:55.465314  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:55.465333  306747 cri.go:89] found id: ""
	I1017 19:30:55.465357  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:55.465448  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.469415  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.473023  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:55.473096  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:55.499608  306747 cri.go:89] found id: ""
	I1017 19:30:55.499681  306747 logs.go:282] 0 containers: []
	W1017 19:30:55.499704  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:55.499724  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:55.499814  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:55.526471  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:55.526494  306747 cri.go:89] found id: ""
	I1017 19:30:55.526502  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:55.526586  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.530319  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:55.530395  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:55.558617  306747 cri.go:89] found id: ""
	I1017 19:30:55.558639  306747 logs.go:282] 0 containers: []
	W1017 19:30:55.558647  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:55.558656  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:55.558668  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:55.578357  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:55.578390  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:55.642730  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:55.635023   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.635478   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.637010   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.637409   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.638832   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1017 19:30:55.642749  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:55.642763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:55.673301  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:55.673329  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:55.735266  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:55.735301  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:55.777444  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:55.777474  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:55.891903  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:55.891985  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:55.976455  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:55.976492  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:56.005202  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:56.005238  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:56.034021  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:56.034049  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:56.086550  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:56.086581  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:58.687094  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:58.698343  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:58.698420  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:58.737082  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:58.737144  306747 cri.go:89] found id: ""
	I1017 19:30:58.737165  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:58.737251  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.740769  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:58.740830  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:58.768900  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:58.768920  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:58.768931  306747 cri.go:89] found id: ""
	I1017 19:30:58.768938  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:58.768991  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.773597  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.777023  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:58.777094  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:58.808627  306747 cri.go:89] found id: ""
	I1017 19:30:58.808654  306747 logs.go:282] 0 containers: []
	W1017 19:30:58.808675  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:58.808681  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:58.808778  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:58.833787  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:58.833810  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:58.833815  306747 cri.go:89] found id: ""
	I1017 19:30:58.833823  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:58.833902  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.837729  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.841076  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:58.841161  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:58.876060  306747 cri.go:89] found id: ""
	I1017 19:30:58.876089  306747 logs.go:282] 0 containers: []
	W1017 19:30:58.876099  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:58.876107  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:58.876189  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:58.906434  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:58.906509  306747 cri.go:89] found id: ""
	I1017 19:30:58.906524  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:58.906598  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.911053  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:58.911127  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:58.936724  306747 cri.go:89] found id: ""
	I1017 19:30:58.936748  306747 logs.go:282] 0 containers: []
	W1017 19:30:58.936757  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:58.936765  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:58.936776  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:59.014607  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:59.014643  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:59.044576  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:59.044655  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:59.124177  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:59.124211  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:59.156709  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:59.156737  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:59.175384  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:59.175413  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:59.209100  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:59.209136  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:59.235216  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:59.235244  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:59.337596  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:59.337631  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:59.405118  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:59.396347   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.396989   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.398679   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.399208   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.400795   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1017 19:30:59.405140  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:59.405153  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:59.431225  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:59.431255  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
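	Each pass gathers the same set of component logs through crictl. To inspect just the failing piece by hand, the same pattern can be narrowed to the kube-apiserver container (again a sketch under the assumptions above; container IDs are resolved at run time rather than copied from this report):

	APISERVER_ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	sudo crictl logs --tail 100 "$APISERVER_ID"
	# an apiserver that starts but refuses connections often cannot reach etcd,
	# so the etcd containers listed above are worth the same treatment
	sudo crictl ps -a --name etcd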
	I1017 19:31:02.008171  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:02.020307  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:02.020387  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:02.051051  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:02.051079  306747 cri.go:89] found id: ""
	I1017 19:31:02.051099  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:02.051161  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.056015  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:02.056088  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:02.089743  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:02.089817  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:02.089836  306747 cri.go:89] found id: ""
	I1017 19:31:02.089856  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:02.089943  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.093857  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.097708  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:02.097837  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:02.123389  306747 cri.go:89] found id: ""
	I1017 19:31:02.123411  306747 logs.go:282] 0 containers: []
	W1017 19:31:02.123420  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:02.123426  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:02.123483  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:02.150505  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:02.150582  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:02.150596  306747 cri.go:89] found id: ""
	I1017 19:31:02.150605  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:02.150681  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.154543  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.158104  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:02.158177  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:02.186868  306747 cri.go:89] found id: ""
	I1017 19:31:02.186895  306747 logs.go:282] 0 containers: []
	W1017 19:31:02.186904  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:02.186911  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:02.186974  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:02.215359  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:02.215426  306747 cri.go:89] found id: ""
	I1017 19:31:02.215451  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:02.215524  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.219153  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:02.219266  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:02.246345  306747 cri.go:89] found id: ""
	I1017 19:31:02.246371  306747 logs.go:282] 0 containers: []
	W1017 19:31:02.246381  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:02.246391  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:02.246402  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:02.280313  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:02.280387  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:02.385786  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:02.385822  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:02.414602  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:02.414679  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:02.492313  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:02.492350  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:02.511027  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:02.511067  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:02.590723  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:02.582016   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.582767   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.584046   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.585740   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.586186   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1017 19:31:02.590747  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:02.590762  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:02.653228  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:02.653264  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:02.687148  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:02.687183  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:02.790229  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:02.790269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:02.819586  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:02.819615  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:05.355439  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:05.367250  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:05.367353  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:05.393587  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:05.393611  306747 cri.go:89] found id: ""
	I1017 19:31:05.393620  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:05.393674  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.397564  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:05.397685  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:05.423815  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:05.423840  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:05.423845  306747 cri.go:89] found id: ""
	I1017 19:31:05.423853  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:05.423921  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.427632  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.431060  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:05.431129  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:05.457152  306747 cri.go:89] found id: ""
	I1017 19:31:05.457176  306747 logs.go:282] 0 containers: []
	W1017 19:31:05.457186  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:05.457192  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:05.457256  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:05.483757  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:05.483779  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:05.483784  306747 cri.go:89] found id: ""
	I1017 19:31:05.483791  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:05.483845  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.487471  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.490789  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:05.490859  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:05.516653  306747 cri.go:89] found id: ""
	I1017 19:31:05.516676  306747 logs.go:282] 0 containers: []
	W1017 19:31:05.516684  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:05.516690  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:05.516793  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:05.542033  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:05.542059  306747 cri.go:89] found id: ""
	I1017 19:31:05.542091  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:05.542153  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.545908  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:05.545978  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:05.571870  306747 cri.go:89] found id: ""
	I1017 19:31:05.571892  306747 logs.go:282] 0 containers: []
	W1017 19:31:05.571901  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:05.571909  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:05.571923  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:05.649030  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:05.639899   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.640483   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.642053   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.642716   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.644399   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1017 19:31:05.649050  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:05.649062  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:05.677036  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:05.677065  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:05.718764  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:05.718795  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:05.803861  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:05.803897  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:05.835788  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:05.835814  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:05.864823  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:05.864853  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:05.947756  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:05.947788  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:05.979938  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:05.980005  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:06.080355  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:06.080392  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:06.104116  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:06.104145  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:08.667177  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:08.677727  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:08.677793  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:08.704338  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:08.704362  306747 cri.go:89] found id: ""
	I1017 19:31:08.704370  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:08.704422  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.707981  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:08.708049  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:08.733111  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:08.733130  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:08.733135  306747 cri.go:89] found id: ""
	I1017 19:31:08.733142  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:08.733201  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.737039  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.740374  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:08.740480  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:08.768239  306747 cri.go:89] found id: ""
	I1017 19:31:08.768307  306747 logs.go:282] 0 containers: []
	W1017 19:31:08.768338  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:08.768381  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:08.768471  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:08.795436  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:08.795499  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:08.795524  306747 cri.go:89] found id: ""
	I1017 19:31:08.795537  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:08.795609  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.799450  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.803242  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:08.803312  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:08.831323  306747 cri.go:89] found id: ""
	I1017 19:31:08.831348  306747 logs.go:282] 0 containers: []
	W1017 19:31:08.831358  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:08.831364  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:08.831427  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:08.865991  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:08.866014  306747 cri.go:89] found id: ""
	I1017 19:31:08.866022  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:08.866077  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.870085  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:08.870174  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:08.905447  306747 cri.go:89] found id: ""
	I1017 19:31:08.905475  306747 logs.go:282] 0 containers: []
	W1017 19:31:08.905483  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:08.905492  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:08.905504  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:08.988463  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:08.988574  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:09.021674  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:09.021711  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:09.050080  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:09.050111  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:09.126939  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:09.126972  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:09.161551  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:09.161580  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:09.179459  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:09.179490  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:09.209038  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:09.209066  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:09.271767  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:09.271810  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:09.373919  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:09.373956  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:09.439533  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:09.431442   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.432120   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.433687   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.434214   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.435793   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1017 19:31:09.439556  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:09.439570  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
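	Judging by the timestamps, this gather-and-retry cycle repeats roughly every three seconds while the tooling waits for the apiserver to come back. A rough shell equivalent of that wait, using the kubectl binary and kubeconfig paths shown in the log (a sketch, not the project's own wait logic), would be:

	for i in $(seq 1 60); do
	  if sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz >/dev/null 2>&1; then
	    echo "apiserver ready after ${i} attempts"; break
	  fi
	  sleep 3
	done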
	I1017 19:31:11.978816  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:11.990102  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:11.990174  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:12.023196  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:12.023225  306747 cri.go:89] found id: ""
	I1017 19:31:12.023235  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:12.023302  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.027739  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:12.027832  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:12.055241  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:12.055265  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:12.055270  306747 cri.go:89] found id: ""
	I1017 19:31:12.055278  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:12.055336  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.059592  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.064052  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:12.064121  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:12.103548  306747 cri.go:89] found id: ""
	I1017 19:31:12.103575  306747 logs.go:282] 0 containers: []
	W1017 19:31:12.103584  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:12.103591  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:12.103650  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:12.131971  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:12.131995  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:12.132000  306747 cri.go:89] found id: ""
	I1017 19:31:12.132008  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:12.132063  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.136064  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.139529  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:12.139597  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:12.165954  306747 cri.go:89] found id: ""
	I1017 19:31:12.165977  306747 logs.go:282] 0 containers: []
	W1017 19:31:12.165985  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:12.165991  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:12.166049  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:12.195543  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:12.195568  306747 cri.go:89] found id: ""
	I1017 19:31:12.195577  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:12.195632  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.199531  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:12.199603  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:12.225881  306747 cri.go:89] found id: ""
	I1017 19:31:12.225911  306747 logs.go:282] 0 containers: []
	W1017 19:31:12.225920  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:12.225929  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:12.225942  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:12.259524  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:12.259552  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:12.333075  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:12.333112  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:12.363221  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:12.363249  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:12.467386  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:12.467420  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:12.498049  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:12.498077  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:12.577701  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:12.577736  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:12.607614  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:12.607650  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:12.637568  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:12.637597  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:12.717020  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:12.717054  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:12.740140  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:12.740170  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:12.806245  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:12.796625   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.797249   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.799733   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.800324   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.802649   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:12.796625   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.797249   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.799733   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.800324   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.802649   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:15.306473  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:15.318959  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:15.319030  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:15.345727  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:15.345823  306747 cri.go:89] found id: ""
	I1017 19:31:15.345847  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:15.345935  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.349860  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:15.349937  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:15.382414  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:15.382437  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:15.382442  306747 cri.go:89] found id: ""
	I1017 19:31:15.382463  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:15.382539  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.386718  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.390470  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:15.390578  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:15.417577  306747 cri.go:89] found id: ""
	I1017 19:31:15.417652  306747 logs.go:282] 0 containers: []
	W1017 19:31:15.417668  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:15.417676  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:15.417743  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:15.445163  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:15.445206  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:15.445211  306747 cri.go:89] found id: ""
	I1017 19:31:15.445220  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:15.445305  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.450196  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.453988  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:15.454058  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:15.479623  306747 cri.go:89] found id: ""
	I1017 19:31:15.479647  306747 logs.go:282] 0 containers: []
	W1017 19:31:15.479655  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:15.479662  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:15.479725  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:15.505913  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:15.505936  306747 cri.go:89] found id: ""
	I1017 19:31:15.505953  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:15.506007  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.509808  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:15.509881  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:15.535383  306747 cri.go:89] found id: ""
	I1017 19:31:15.535408  306747 logs.go:282] 0 containers: []
	W1017 19:31:15.535418  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:15.535428  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:15.535440  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:15.561245  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:15.561272  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:15.622736  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:15.622771  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:15.660115  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:15.660150  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:15.758501  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:15.758536  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:15.778239  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:15.778273  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:15.857887  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:15.842831   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.843942   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.845164   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.846077   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.848805   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:15.842831   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.843942   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.845164   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.846077   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.848805   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:15.857910  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:15.857926  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:15.946523  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:15.946560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:15.980219  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:15.980245  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:16.013998  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:16.014027  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:16.095391  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:16.095426  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:18.629382  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:18.642985  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:18.643054  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:18.669511  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:18.669532  306747 cri.go:89] found id: ""
	I1017 19:31:18.669541  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:18.669601  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.673633  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:18.673707  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:18.702215  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:18.702239  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:18.702244  306747 cri.go:89] found id: ""
	I1017 19:31:18.702252  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:18.702331  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.709379  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.717482  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:18.717554  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:18.744246  306747 cri.go:89] found id: ""
	I1017 19:31:18.744269  306747 logs.go:282] 0 containers: []
	W1017 19:31:18.744277  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:18.744283  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:18.744337  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:18.770169  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:18.770192  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:18.770197  306747 cri.go:89] found id: ""
	I1017 19:31:18.770205  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:18.770271  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.774060  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.777555  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:18.777624  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:18.804459  306747 cri.go:89] found id: ""
	I1017 19:31:18.804485  306747 logs.go:282] 0 containers: []
	W1017 19:31:18.804494  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:18.804500  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:18.804582  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:18.831698  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:18.831721  306747 cri.go:89] found id: ""
	I1017 19:31:18.831730  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:18.831783  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.837132  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:18.837273  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:18.870956  306747 cri.go:89] found id: ""
	I1017 19:31:18.870983  306747 logs.go:282] 0 containers: []
	W1017 19:31:18.870992  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:18.871001  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:18.871012  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:18.986913  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:18.986950  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:19.007461  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:19.007493  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:19.035000  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:19.035029  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:19.116120  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:19.116154  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:19.146274  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:19.146303  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:19.226087  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:19.226126  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:19.274249  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:19.274285  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:19.342797  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:19.333272   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.333919   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.335774   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.336320   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.338756   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:19.333272   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.333919   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.335774   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.336320   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.338756   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:19.342824  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:19.342837  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:19.405167  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:19.405241  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:19.437359  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:19.437389  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:21.966216  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:21.977051  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:21.977124  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:22.010370  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:22.010393  306747 cri.go:89] found id: ""
	I1017 19:31:22.010401  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:22.010463  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.014786  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:22.014905  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:22.054881  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:22.054905  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:22.054910  306747 cri.go:89] found id: ""
	I1017 19:31:22.054917  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:22.054974  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.058919  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.062725  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:22.062801  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:22.092827  306747 cri.go:89] found id: ""
	I1017 19:31:22.092910  306747 logs.go:282] 0 containers: []
	W1017 19:31:22.092926  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:22.092935  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:22.093011  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:22.120574  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:22.120597  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:22.120602  306747 cri.go:89] found id: ""
	I1017 19:31:22.120609  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:22.120665  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.124579  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.128240  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:22.128314  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:22.155355  306747 cri.go:89] found id: ""
	I1017 19:31:22.155382  306747 logs.go:282] 0 containers: []
	W1017 19:31:22.155392  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:22.155398  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:22.155457  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:22.182686  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:22.182750  306747 cri.go:89] found id: ""
	I1017 19:31:22.182771  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:22.182857  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.186655  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:22.186754  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:22.211995  306747 cri.go:89] found id: ""
	I1017 19:31:22.212020  306747 logs.go:282] 0 containers: []
	W1017 19:31:22.212029  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:22.212038  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:22.212080  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:22.310483  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:22.310518  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:22.376696  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:22.367517   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.368315   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.370151   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.370790   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.372572   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:22.367517   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.368315   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.370151   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.370790   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.372572   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:22.376758  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:22.376778  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:22.406493  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:22.406521  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:22.425071  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:22.425110  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:22.454385  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:22.454416  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:22.516625  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:22.516662  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:22.551521  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:22.551555  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:22.645961  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:22.645999  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:22.676665  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:22.676691  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:22.757888  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:22.758011  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:25.307695  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:25.318532  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:25.318666  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:25.351844  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:25.351866  306747 cri.go:89] found id: ""
	I1017 19:31:25.351873  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:25.351936  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.355571  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:25.355637  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:25.382616  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:25.382640  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:25.382646  306747 cri.go:89] found id: ""
	I1017 19:31:25.382664  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:25.382717  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.386649  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.390174  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:25.390311  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:25.417606  306747 cri.go:89] found id: ""
	I1017 19:31:25.417630  306747 logs.go:282] 0 containers: []
	W1017 19:31:25.417639  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:25.417645  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:25.417706  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:25.445452  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:25.445475  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:25.445480  306747 cri.go:89] found id: ""
	I1017 19:31:25.445487  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:25.445541  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.449471  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.452872  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:25.452956  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:25.480615  306747 cri.go:89] found id: ""
	I1017 19:31:25.480648  306747 logs.go:282] 0 containers: []
	W1017 19:31:25.480658  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:25.480664  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:25.480732  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:25.507575  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:25.507595  306747 cri.go:89] found id: ""
	I1017 19:31:25.507603  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:25.507669  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.512130  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:25.512199  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:25.539371  306747 cri.go:89] found id: ""
	I1017 19:31:25.539441  306747 logs.go:282] 0 containers: []
	W1017 19:31:25.539463  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:25.539488  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:25.539527  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:25.619877  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:25.619914  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:25.638042  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:25.638071  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:25.677301  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:25.677335  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:25.768647  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:25.768682  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:25.808421  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:25.808456  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:25.833684  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:25.833709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:25.930177  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:25.930222  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:25.981992  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:25.982022  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:26.087083  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:26.087123  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:26.158486  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:26.150658   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.151278   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.152877   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.153291   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.154745   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:26.150658   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.151278   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.152877   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.153291   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.154745   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:26.158506  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:26.158519  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:28.685675  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:28.697159  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:28.697228  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:28.724197  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:28.724223  306747 cri.go:89] found id: ""
	I1017 19:31:28.724231  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:28.724294  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.728163  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:28.728249  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:28.755375  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:28.755400  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:28.755405  306747 cri.go:89] found id: ""
	I1017 19:31:28.755413  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:28.755465  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.759475  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.762827  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:28.762901  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:28.788123  306747 cri.go:89] found id: ""
	I1017 19:31:28.788150  306747 logs.go:282] 0 containers: []
	W1017 19:31:28.788159  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:28.788165  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:28.788221  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:28.818579  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:28.818611  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:28.818617  306747 cri.go:89] found id: ""
	I1017 19:31:28.818624  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:28.818677  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.822375  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.825827  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:28.825901  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:28.856344  306747 cri.go:89] found id: ""
	I1017 19:31:28.856371  306747 logs.go:282] 0 containers: []
	W1017 19:31:28.856379  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:28.856386  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:28.856456  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:28.883877  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:28.883901  306747 cri.go:89] found id: ""
	I1017 19:31:28.883909  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:28.883969  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.890405  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:28.890482  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:28.919970  306747 cri.go:89] found id: ""
	I1017 19:31:28.919997  306747 logs.go:282] 0 containers: []
	W1017 19:31:28.920007  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:28.920016  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:28.920028  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:28.938590  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:28.938619  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:29.012463  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:29.012502  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:29.051714  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:29.051751  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:29.139864  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:29.139904  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:29.167130  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:29.167157  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:29.244122  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:29.244163  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:29.289243  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:29.289271  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:29.365219  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:29.356772   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.357390   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.358919   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.359407   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.360893   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:29.356772   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.357390   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.358919   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.359407   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.360893   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:29.365246  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:29.365260  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:29.391983  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:29.392013  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:29.418030  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:29.418136  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:32.016682  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:32.027928  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:32.028056  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:32.057743  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:32.057770  306747 cri.go:89] found id: ""
	I1017 19:31:32.057779  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:32.057832  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.062215  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:32.062350  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:32.096282  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:32.096359  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:32.096379  306747 cri.go:89] found id: ""
	I1017 19:31:32.096402  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:32.096490  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.100272  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.104020  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:32.104094  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:32.130658  306747 cri.go:89] found id: ""
	I1017 19:31:32.130684  306747 logs.go:282] 0 containers: []
	W1017 19:31:32.130692  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:32.130698  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:32.130785  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:32.158436  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:32.158459  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:32.158464  306747 cri.go:89] found id: ""
	I1017 19:31:32.158472  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:32.158524  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.162501  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.165977  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:32.166093  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:32.192337  306747 cri.go:89] found id: ""
	I1017 19:31:32.192414  306747 logs.go:282] 0 containers: []
	W1017 19:31:32.192438  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:32.192460  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:32.192566  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:32.224591  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:32.224625  306747 cri.go:89] found id: ""
	I1017 19:31:32.224643  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:32.224699  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.228992  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:32.229114  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:32.263902  306747 cri.go:89] found id: ""
	I1017 19:31:32.263936  306747 logs.go:282] 0 containers: []
	W1017 19:31:32.263945  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:32.263954  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:32.263970  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:32.331346  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:32.321358   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.322175   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.325150   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.325743   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.327508   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:32.321358   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.322175   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.325150   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.325743   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.327508   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:32.331370  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:32.331383  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:32.358344  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:32.358372  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:32.419310  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:32.419347  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:32.462060  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:32.462091  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:32.543672  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:32.543709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:32.572300  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:32.572327  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:32.650752  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:32.650785  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:32.687208  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:32.687239  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:32.785332  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:32.785370  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:32.804237  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:32.804272  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:35.336200  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:35.351300  306747 out.go:203] 
	W1017 19:31:35.354294  306747 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1017 19:31:35.354331  306747 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1017 19:31:35.354341  306747 out.go:285] * Related issues:
	* Related issues:
	W1017 19:31:35.354355  306747 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1017 19:31:35.354368  306747 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1017 19:31:35.357325  306747 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-arm64 -p ha-254035 node list --alsologtostderr -v 5" : exit status 105
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 node list --alsologtostderr -v 5
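The stderr above shows the restart aborting with K8S_APISERVER_MISSING: after the container comes back up, the wait loop never sees a kube-apiserver process within the 6m0s timeout. A minimal sketch of how one might rerun the same checks by hand, using only commands that appear in the log above (the profile name ha-254035 and the out/minikube-linux-arm64 binary are the ones from this run; <container-id> is a placeholder for an ID returned by crictl):

	# Look for a running apiserver process the same way the wait loop does
	out/minikube-linux-arm64 -p ha-254035 ssh -- sudo pgrep -xnf kube-apiserver.*minikube.*
	# List apiserver containers known to CRI-O, including exited ones
	out/minikube-linux-arm64 -p ha-254035 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	# If an ID is returned, check why that container is not serving on 8443
	out/minikube-linux-arm64 -p ha-254035 ssh -- sudo /usr/local/bin/crictl logs --tail 400 <container-id>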
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-254035
helpers_test.go:243: (dbg) docker inspect ha-254035:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8",
	        "Created": "2025-10-17T19:17:36.603472481Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 306876,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:23:44.340324163Z",
	            "FinishedAt": "2025-10-17T19:23:43.760876929Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/hosts",
	        "LogPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8-json.log",
	        "Name": "/ha-254035",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-254035:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-254035",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8",
	                "LowerDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-254035",
	                "Source": "/var/lib/docker/volumes/ha-254035/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-254035",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-254035",
	                "name.minikube.sigs.k8s.io": "ha-254035",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d0adb3a8a6f2813284c8f1a167175cc89dcd4664a3ffc878d2459fa2b4bea6d1",
	            "SandboxKey": "/var/run/docker/netns/d0adb3a8a6f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-254035": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:f1:6c:59:90:54",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9f667d9c3ea201faa6573d33bffc4907012785051d424eb86a31b1e09eb8b135",
	                    "EndpointID": "daecfb65c2dbfda1e321a7412bf642ac1f3e72c152f9f670fa4c977e6a8f5b74",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-254035",
	                        "7f770318d5dc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
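The inspect dump confirms the kic container itself restarted cleanly: "Status": "running" (pid 306876), port 8443/tcp is published on 127.0.0.1:33177, and the node holds 192.168.49.2 on the ha-254035 network, so the failure sits inside the guest rather than at the Docker layer. As a sketch, the same fields can be pulled without the full dump using Go templates of the kind minikube itself runs later in this log (container name ha-254035 as above):

	# Container state and the host port mapped to the apiserver port 8443/tcp
	docker container inspect -f '{{.State.Status}}' ha-254035
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-254035
	# Container IP on the cluster network
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-254035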
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-254035 -n ha-254035
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 logs -n 25: (2.224568189s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-254035 cp ha-254035-m03:/home/docker/cp-test.txt ha-254035-m02:/home/docker/cp-test_ha-254035-m03_ha-254035-m02.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m02 sudo cat /home/docker/cp-test_ha-254035-m03_ha-254035-m02.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m03:/home/docker/cp-test.txt ha-254035-m04:/home/docker/cp-test_ha-254035-m03_ha-254035-m04.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test_ha-254035-m03_ha-254035-m04.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp testdata/cp-test.txt ha-254035-m04:/home/docker/cp-test.txt                                                             │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1188979754/001/cp-test_ha-254035-m04.txt │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035:/home/docker/cp-test_ha-254035-m04_ha-254035.txt                       │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035.txt                                                 │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035-m02:/home/docker/cp-test_ha-254035-m04_ha-254035-m02.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m02 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035-m02.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035-m03:/home/docker/cp-test_ha-254035-m04_ha-254035-m03.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m03 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035-m03.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ node    │ ha-254035 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ node    │ ha-254035 node start m02 --alsologtostderr -v 5                                                                                      │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:23 UTC │
	│ node    │ ha-254035 node list --alsologtostderr -v 5                                                                                           │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │                     │
	│ stop    │ ha-254035 stop --alsologtostderr -v 5                                                                                                │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │ 17 Oct 25 19:23 UTC │
	│ start   │ ha-254035 start --wait true --alsologtostderr -v 5                                                                                   │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │                     │
	│ node    │ ha-254035 node list --alsologtostderr -v 5                                                                                           │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:31 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:23:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:23:44.078300  306747 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:23:44.078421  306747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:23:44.078432  306747 out.go:374] Setting ErrFile to fd 2...
	I1017 19:23:44.078438  306747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:23:44.078707  306747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:23:44.079081  306747 out.go:368] Setting JSON to false
	I1017 19:23:44.079937  306747 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7575,"bootTime":1760721449,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 19:23:44.080008  306747 start.go:141] virtualization:  
	I1017 19:23:44.083220  306747 out.go:179] * [ha-254035] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 19:23:44.087049  306747 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:23:44.087156  306747 notify.go:220] Checking for updates...
	I1017 19:23:44.093223  306747 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:23:44.096040  306747 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:23:44.098900  306747 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 19:23:44.101720  306747 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 19:23:44.104684  306747 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:23:44.108337  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:44.108506  306747 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:23:44.135326  306747 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 19:23:44.135444  306747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:23:44.192131  306747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 19:23:44.183230595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:23:44.192236  306747 docker.go:318] overlay module found
	I1017 19:23:44.195310  306747 out.go:179] * Using the docker driver based on existing profile
	I1017 19:23:44.198085  306747 start.go:305] selected driver: docker
	I1017 19:23:44.198103  306747 start.go:925] validating driver "docker" against &{Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:23:44.198244  306747 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:23:44.198355  306747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:23:44.253333  306747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 19:23:44.243935529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:23:44.253792  306747 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:23:44.253819  306747 cni.go:84] Creating CNI manager for ""
	I1017 19:23:44.253877  306747 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 19:23:44.253928  306747 start.go:349] cluster config:
	{Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:23:44.258934  306747 out.go:179] * Starting "ha-254035" primary control-plane node in "ha-254035" cluster
	I1017 19:23:44.261731  306747 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:23:44.264643  306747 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:23:44.267316  306747 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:23:44.267375  306747 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 19:23:44.267392  306747 cache.go:58] Caching tarball of preloaded images
	I1017 19:23:44.267402  306747 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:23:44.267494  306747 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:23:44.267505  306747 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:23:44.267648  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:44.287307  306747 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:23:44.287328  306747 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:23:44.287345  306747 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:23:44.287367  306747 start.go:360] acquireMachinesLock for ha-254035: {Name:mka2e39989b9cf6078778e7f6519885462ea711f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:23:44.287430  306747 start.go:364] duration metric: took 44.061µs to acquireMachinesLock for "ha-254035"
	I1017 19:23:44.287455  306747 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:23:44.287461  306747 fix.go:54] fixHost starting: 
	I1017 19:23:44.287734  306747 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:23:44.304208  306747 fix.go:112] recreateIfNeeded on ha-254035: state=Stopped err=<nil>
	W1017 19:23:44.304236  306747 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:23:44.307544  306747 out.go:252] * Restarting existing docker container for "ha-254035" ...
	I1017 19:23:44.307642  306747 cli_runner.go:164] Run: docker start ha-254035
	I1017 19:23:44.557261  306747 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:23:44.582382  306747 kic.go:430] container "ha-254035" state is running.
	I1017 19:23:44.582813  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:23:44.609625  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:44.609882  306747 machine.go:93] provisionDockerMachine start ...
	I1017 19:23:44.609944  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:44.630467  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:44.634045  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 19:23:44.634070  306747 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:23:44.634815  306747 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:23:47.792030  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035
	
	I1017 19:23:47.792065  306747 ubuntu.go:182] provisioning hostname "ha-254035"
	I1017 19:23:47.792127  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:47.809622  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:47.809936  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 19:23:47.809952  306747 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035 && echo "ha-254035" | sudo tee /etc/hostname
	I1017 19:23:47.965159  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035
	
	I1017 19:23:47.965243  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:47.983936  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:47.984247  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 19:23:47.984262  306747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:23:48.140890  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:23:48.140965  306747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:23:48.140998  306747 ubuntu.go:190] setting up certificates
	I1017 19:23:48.141008  306747 provision.go:84] configureAuth start
	I1017 19:23:48.141069  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:23:48.158600  306747 provision.go:143] copyHostCerts
	I1017 19:23:48.158645  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:23:48.158680  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:23:48.158692  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:23:48.158773  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:23:48.158860  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:23:48.158883  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:23:48.158892  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:23:48.158921  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:23:48.158969  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:23:48.158990  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:23:48.158998  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:23:48.159024  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:23:48.159076  306747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035 san=[127.0.0.1 192.168.49.2 ha-254035 localhost minikube]
	I1017 19:23:49.196726  306747 provision.go:177] copyRemoteCerts
	I1017 19:23:49.196790  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:23:49.196831  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:49.213909  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:49.316345  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:23:49.316405  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:23:49.333689  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:23:49.333750  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1017 19:23:49.350869  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:23:49.350938  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 19:23:49.369234  306747 provision.go:87] duration metric: took 1.228212253s to configureAuth
	I1017 19:23:49.369303  306747 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:23:49.369552  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:49.369665  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:49.386704  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:49.387020  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 19:23:49.387042  306747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:23:49.707607  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:23:49.707692  306747 machine.go:96] duration metric: took 5.097783711s to provisionDockerMachine
	I1017 19:23:49.707720  306747 start.go:293] postStartSetup for "ha-254035" (driver="docker")
	I1017 19:23:49.707762  306747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:23:49.707871  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:23:49.707943  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:49.732798  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:49.836574  306747 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:23:49.839984  306747 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:23:49.840010  306747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:23:49.840021  306747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:23:49.840085  306747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:23:49.840181  306747 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:23:49.840196  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:23:49.840298  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:23:49.847846  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:23:49.865445  306747 start.go:296] duration metric: took 157.679358ms for postStartSetup
	I1017 19:23:49.865569  306747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:23:49.865624  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:49.889188  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:49.989662  306747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:23:49.994825  306747 fix.go:56] duration metric: took 5.707355296s for fixHost
	I1017 19:23:49.994852  306747 start.go:83] releasing machines lock for "ha-254035", held for 5.707408965s
	I1017 19:23:49.994927  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:23:50.015297  306747 ssh_runner.go:195] Run: cat /version.json
	I1017 19:23:50.015360  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:50.015301  306747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:23:50.015521  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:50.036378  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:50.050179  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:50.238257  306747 ssh_runner.go:195] Run: systemctl --version
	I1017 19:23:50.244735  306747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:23:50.281650  306747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:23:50.286151  306747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:23:50.286279  306747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:23:50.294085  306747 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:23:50.294116  306747 start.go:495] detecting cgroup driver to use...
	I1017 19:23:50.294156  306747 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:23:50.294238  306747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:23:50.309600  306747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:23:50.322860  306747 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:23:50.322932  306747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:23:50.338234  306747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:23:50.351355  306747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:23:50.467572  306747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:23:50.583217  306747 docker.go:234] disabling docker service ...
	I1017 19:23:50.583338  306747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:23:50.598924  306747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:23:50.611975  306747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:23:50.724286  306747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:23:50.847044  306747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:23:50.859364  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:23:50.873503  306747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:23:50.873573  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.882985  306747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:23:50.883056  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.892747  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.902591  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.911060  306747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:23:50.919007  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.928031  306747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.936934  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.945620  306747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:23:50.953208  306747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:23:50.960459  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:23:51.085184  306747 ssh_runner.go:195] Run: sudo systemctl restart crio
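A sketch (assuming only what the sed commands above set): after this restart, the CRI-O drop-in should carry the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl configured above. One way to confirm on the node:

  # Show the keys the sed edits above are meant to produce in the drop-in.
  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  # Expected, per the edits above:
  #   pause_image = "registry.k8s.io/pause:3.10.1"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #     "net.ipv4.ip_unprivileged_port_start=0",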
	I1017 19:23:51.215570  306747 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:23:51.215643  306747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:23:51.219416  306747 start.go:563] Will wait 60s for crictl version
	I1017 19:23:51.219481  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:23:51.222932  306747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:23:51.247803  306747 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:23:51.247951  306747 ssh_runner.go:195] Run: crio --version
	I1017 19:23:51.276815  306747 ssh_runner.go:195] Run: crio --version
	I1017 19:23:51.309138  306747 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:23:51.311805  306747 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:23:51.327519  306747 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:23:51.331666  306747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:23:51.341689  306747 kubeadm.go:883] updating cluster {Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:23:51.341851  306747 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:23:51.341916  306747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:23:51.379317  306747 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:23:51.379341  306747 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:23:51.379396  306747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:23:51.405884  306747 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:23:51.405906  306747 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:23:51.405918  306747 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 19:23:51.406057  306747 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:23:51.406155  306747 ssh_runner.go:195] Run: crio config
	I1017 19:23:51.475467  306747 cni.go:84] Creating CNI manager for ""
	I1017 19:23:51.475497  306747 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 19:23:51.475520  306747 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:23:51.475544  306747 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-254035 NodeName:ha-254035 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:23:51.475670  306747 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-254035"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 19:23:51.475693  306747 kube-vip.go:115] generating kube-vip config ...
	I1017 19:23:51.475756  306747 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:23:51.487989  306747 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
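The fallback above follows directly from that lsmod probe; a sketch of the same check, assuming nothing beyond what the log shows:

  # IPVS-based control-plane load-balancing needs the ip_vs kernel modules; with no
  # matching output (exit status 1), the plain kube-vip config below is used instead.
  sudo lsmod | grep ip_vs || echo "ip_vs not loaded - control-plane load-balancing skipped"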
	I1017 19:23:51.488119  306747 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
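A note on how this manifest is consumed (a sketch based only on steps visible in this log): it is written by the scp step below into the kubelet's static pod directory rather than applied through the API server, so the kubelet starts kube-vip directly:

  # Confirm the manifest landed in the kubelet's staticPodPath
  # (staticPodPath: /etc/kubernetes/manifests in the KubeletConfiguration above).
  ls -l /etc/kubernetes/manifests/kube-vip.yaml
  grep staticPodPath /var/lib/kubelet/config.yaml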
	I1017 19:23:51.488198  306747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:23:51.496044  306747 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:23:51.496117  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1017 19:23:51.503891  306747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1017 19:23:51.517028  306747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:23:51.530699  306747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1017 19:23:51.544563  306747 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 19:23:51.557994  306747 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:23:51.561600  306747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:23:51.571313  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:23:51.690597  306747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:23:51.707379  306747 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.2
	I1017 19:23:51.707451  306747 certs.go:195] generating shared ca certs ...
	I1017 19:23:51.707483  306747 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:51.707678  306747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:23:51.707765  306747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:23:51.707807  306747 certs.go:257] generating profile certs ...
	I1017 19:23:51.707925  306747 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:23:51.707978  306747 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea
	I1017 19:23:51.708011  306747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt.96820cea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1017 19:23:52.143690  306747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt.96820cea ...
	I1017 19:23:52.143724  306747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt.96820cea: {Name:mk84072e95c642d9de97a7b2d7684c1b2411f2c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:52.143929  306747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea ...
	I1017 19:23:52.143944  306747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea: {Name:mk1e13a21ca5f9f77c2e8e2d4f37d2c902696b37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:52.144031  306747 certs.go:382] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt.96820cea -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt
	I1017 19:23:52.144173  306747 certs.go:386] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key
	I1017 19:23:52.144307  306747 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:23:52.144326  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:23:52.144342  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:23:52.144362  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:23:52.144377  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:23:52.144396  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:23:52.144419  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:23:52.144435  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:23:52.144450  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:23:52.144501  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:23:52.144555  306747 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:23:52.144570  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:23:52.144594  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:23:52.144621  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:23:52.144646  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:23:52.144696  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:23:52.144726  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:23:52.144744  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:23:52.144760  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:23:52.145349  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:23:52.164836  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:23:52.182173  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:23:52.200320  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:23:52.220031  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:23:52.239993  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:23:52.259787  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:23:52.278396  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:23:52.296286  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:23:52.313979  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:23:52.331810  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:23:52.349798  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:23:52.364237  306747 ssh_runner.go:195] Run: openssl version
	I1017 19:23:52.376391  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:23:52.385410  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:23:52.389746  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:23:52.389837  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:23:52.434948  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:23:52.443397  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:23:52.452268  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:23:52.460529  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:23:52.460626  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:23:52.518909  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:23:52.528730  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:23:52.541129  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:23:52.545573  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:23:52.545658  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:23:52.629233  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
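The hash-then-symlink pattern repeated above is OpenSSL's CA lookup convention: the link name is the certificate's subject hash plus a .0 suffix. The same two steps condensed into one illustrative sketch:

  # Compute the subject hash of the CA, then expose it as /etc/ssl/certs/<hash>.0,
  # the name OpenSSL looks for when scanning its CA directory.
  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"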
	I1017 19:23:52.650967  306747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:23:52.657469  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:23:52.741430  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:23:52.801484  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:23:52.855613  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:23:52.911294  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:23:52.960715  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
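Each -checkend 86400 call above asks whether the certificate will still be valid 24 hours from now; openssl exits 0 if so and 1 if it expires within that window. A sketch with the result made explicit:

  # Exit status 0 means the cert stays valid for at least the next 86400 seconds (24h).
  if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
    echo "front-proxy-client.crt valid for at least 24h"
  else
    echo "front-proxy-client.crt expires within 24h"
  fi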
	I1017 19:23:53.023389  306747 kubeadm.go:400] StartCluster: {Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:23:53.023526  306747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:23:53.023593  306747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:23:53.070982  306747 cri.go:89] found id: "a9f69dd8228df806b3caf0a6a77814b3035f6624474afd789ff17d36b93becbb"
	I1017 19:23:53.071006  306747 cri.go:89] found id: "2dc181e1d75c199e1d878c25f6b4eb381f5134e5e8ff6ed9deea02322d7cdf4c"
	I1017 19:23:53.071011  306747 cri.go:89] found id: "6fb4bcbcf5815899f9ed7e0ee3f40ae912c24131eda2482a13e66f3bf9211953"
	I1017 19:23:53.071015  306747 cri.go:89] found id: "99ffff8c4838d302fd86aa2def104fc0bc5a061a4b4b00a66b6659be26e84f94"
	I1017 19:23:53.071018  306747 cri.go:89] found id: "b745cb636fe8e12797dbad3808d1af04aa579d4fbd2ba8ac91052e88e1d9594d"
	I1017 19:23:53.071022  306747 cri.go:89] found id: ""
	I1017 19:23:53.071070  306747 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 19:23:53.085921  306747 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:23:53Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:23:53.085995  306747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:23:53.099392  306747 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 19:23:53.099418  306747 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 19:23:53.099471  306747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 19:23:53.118282  306747 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:23:53.118709  306747 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-254035" does not appear in /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:23:53.118820  306747 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-257739/kubeconfig needs updating (will repair): [kubeconfig missing "ha-254035" cluster setting kubeconfig missing "ha-254035" context setting]
	I1017 19:23:53.119084  306747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:53.119598  306747 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 19:23:53.120104  306747 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1017 19:23:53.120124  306747 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1017 19:23:53.120130  306747 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1017 19:23:53.120135  306747 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1017 19:23:53.120142  306747 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1017 19:23:53.120434  306747 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1017 19:23:53.120753  306747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 19:23:53.137306  306747 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
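The "does not require reconfiguration" decision above follows from the diff two lines earlier: when the freshly rendered /var/tmp/minikube/kubeadm.yaml.new matches the kubeadm.yaml already on the node, the control plane is left untouched. The check in isolation, as a sketch:

  # Identical files => diff exits 0 => no kubeadm re-run is needed on this node.
  if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
    echo "kubeadm config unchanged - skipping reconfiguration"
  fi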
	I1017 19:23:53.137333  306747 kubeadm.go:601] duration metric: took 37.90723ms to restartPrimaryControlPlane
	I1017 19:23:53.137344  306747 kubeadm.go:402] duration metric: took 113.964982ms to StartCluster
	I1017 19:23:53.137360  306747 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:53.137421  306747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:23:53.137983  306747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:53.138193  306747 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:23:53.138219  306747 start.go:241] waiting for startup goroutines ...
	I1017 19:23:53.138228  306747 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:23:53.138643  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:53.142436  306747 out.go:179] * Enabled addons: 
	I1017 19:23:53.145409  306747 addons.go:514] duration metric: took 7.175068ms for enable addons: enabled=[]
	I1017 19:23:53.145452  306747 start.go:246] waiting for cluster config update ...
	I1017 19:23:53.145461  306747 start.go:255] writing updated cluster config ...
	I1017 19:23:53.148803  306747 out.go:203] 
	I1017 19:23:53.151893  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:53.152042  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:53.155214  306747 out.go:179] * Starting "ha-254035-m02" control-plane node in "ha-254035" cluster
	I1017 19:23:53.158764  306747 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:23:53.161709  306747 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:23:53.164610  306747 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:23:53.164638  306747 cache.go:58] Caching tarball of preloaded images
	I1017 19:23:53.164743  306747 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:23:53.164758  306747 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:23:53.164894  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:53.165099  306747 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:23:53.194887  306747 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:23:53.194907  306747 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:23:53.194919  306747 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:23:53.194954  306747 start.go:360] acquireMachinesLock for ha-254035-m02: {Name:mkcf59557cfb2c18712510006a9b88f53e9d8916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:23:53.195003  306747 start.go:364] duration metric: took 34.034µs to acquireMachinesLock for "ha-254035-m02"
	I1017 19:23:53.195021  306747 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:23:53.195027  306747 fix.go:54] fixHost starting: m02
	I1017 19:23:53.195286  306747 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:23:53.230172  306747 fix.go:112] recreateIfNeeded on ha-254035-m02: state=Stopped err=<nil>
	W1017 19:23:53.230198  306747 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:23:53.233425  306747 out.go:252] * Restarting existing docker container for "ha-254035-m02" ...
	I1017 19:23:53.233506  306747 cli_runner.go:164] Run: docker start ha-254035-m02
	I1017 19:23:53.677194  306747 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:23:53.705353  306747 kic.go:430] container "ha-254035-m02" state is running.
	I1017 19:23:53.705741  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:23:53.741365  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:53.741612  306747 machine.go:93] provisionDockerMachine start ...
	I1017 19:23:53.741677  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:53.774362  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:53.774683  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1017 19:23:53.774700  306747 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:23:53.776617  306747 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32782->127.0.0.1:33179: read: connection reset by peer
	I1017 19:23:57.101345  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m02
	
	I1017 19:23:57.101367  306747 ubuntu.go:182] provisioning hostname "ha-254035-m02"
	I1017 19:23:57.101452  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:57.129925  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:57.130248  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1017 19:23:57.130260  306747 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035-m02 && echo "ha-254035-m02" | sudo tee /etc/hostname
	I1017 19:23:57.485252  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m02
	
	I1017 19:23:57.485332  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:57.518218  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:57.518523  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1017 19:23:57.518547  306747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:23:57.769807  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:23:57.769837  306747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:23:57.769852  306747 ubuntu.go:190] setting up certificates
	I1017 19:23:57.769861  306747 provision.go:84] configureAuth start
	I1017 19:23:57.769925  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:23:57.808507  306747 provision.go:143] copyHostCerts
	I1017 19:23:57.808576  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:23:57.808611  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:23:57.808621  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:23:57.808702  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:23:57.808777  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:23:57.808795  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:23:57.808799  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:23:57.808824  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:23:57.808885  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:23:57.808900  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:23:57.808904  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:23:57.808927  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:23:57.808973  306747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035-m02 san=[127.0.0.1 192.168.49.3 ha-254035-m02 localhost minikube]
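To relate the san=[...] list above to the server.pem copied to the node below, a small sketch (paths taken from this log) for listing the names actually embedded in the generated certificate:

  # Print the Subject Alternative Names baked into the generated server certificate.
  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem \
    | grep -A1 'Subject Alternative Name'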
	I1017 19:23:58.970392  306747 provision.go:177] copyRemoteCerts
	I1017 19:23:58.970466  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:23:58.970517  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:58.988411  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:23:59.109264  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:23:59.109327  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:23:59.143927  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:23:59.144007  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:23:59.175735  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:23:59.175798  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 19:23:59.207513  306747 provision.go:87] duration metric: took 1.437637997s to configureAuth
	I1017 19:23:59.207541  306747 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:23:59.207787  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:59.207891  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:59.254211  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:59.254534  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1017 19:23:59.254554  306747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:23:59.802396  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:23:59.802506  306747 machine.go:96] duration metric: took 6.06086173s to provisionDockerMachine
	I1017 19:23:59.802537  306747 start.go:293] postStartSetup for "ha-254035-m02" (driver="docker")
	I1017 19:23:59.802584  306747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:23:59.802692  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:23:59.802768  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:59.826274  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:23:59.933472  306747 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:23:59.937860  306747 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:23:59.937890  306747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:23:59.937902  306747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:23:59.937957  306747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:23:59.938045  306747 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:23:59.938058  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:23:59.938173  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:23:59.946632  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:23:59.974586  306747 start.go:296] duration metric: took 172.005858ms for postStartSetup
	I1017 19:23:59.974693  306747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:23:59.974736  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:59.998482  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:24:00.178671  306747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:24:00.215855  306747 fix.go:56] duration metric: took 7.020817171s for fixHost
	I1017 19:24:00.215889  306747 start.go:83] releasing machines lock for "ha-254035-m02", held for 7.020877911s
	I1017 19:24:00.215976  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:24:00.366887  306747 out.go:179] * Found network options:
	I1017 19:24:00.370345  306747 out.go:179]   - NO_PROXY=192.168.49.2
	W1017 19:24:00.373400  306747 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:24:00.373520  306747 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 19:24:00.373638  306747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:24:00.373712  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:24:00.373921  306747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:24:00.373955  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:24:00.473797  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:24:00.502501  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:24:01.163570  306747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:24:01.201188  306747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:24:01.201285  306747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:24:01.221545  306747 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
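The two steps above look for stray bridge/podman CNI configs under /etc/cni/net.d and rename any hits to *.mk_disabled so they stop taking effect; in this run none were found. A rough Go sketch of the same idea, illustrative only (the real flow shells out to find over SSH and also honours -maxdepth and the existing .mk_disabled exclusion):

// Sketch of "disable bridge CNI configs": rename any *bridge* / *podman*
// conf files under /etc/cni/net.d to *.mk_disabled. Illustrative only.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableBridgeCNI(dir string) ([]string, error) {
	var moved []string
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNI("/etc/cni/net.d")
	fmt.Println(moved, err)
}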
	I1017 19:24:01.221578  306747 start.go:495] detecting cgroup driver to use...
	I1017 19:24:01.221624  306747 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:24:01.221679  306747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:24:01.249432  306747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:24:01.274115  306747 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:24:01.274197  306747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:24:01.300156  306747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:24:01.327634  306747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:24:01.676293  306747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:24:01.963473  306747 docker.go:234] disabling docker service ...
	I1017 19:24:01.963548  306747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:24:01.985469  306747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:24:02.006761  306747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:24:02.326335  306747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:24:02.689696  306747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:24:02.707153  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:24:02.733380  306747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:24:02.733503  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.745270  306747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:24:02.745354  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.761212  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.777017  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.786654  306747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:24:02.797775  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.809053  306747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.819042  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.830450  306747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:24:02.839137  306747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:24:02.853061  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:24:03.081615  306747 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:25:33.444575  306747 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.36287356s)
	I1017 19:25:33.444601  306747 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:25:33.444663  306747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
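After restarting CRI-O (which took 1m30s here) the flow waits up to 60s for /var/run/crio/crio.sock to exist before probing crictl. A minimal sketch of such a socket wait, assuming plain polling with os.Stat:

// Minimal sketch of "wait up to 60s for /var/run/crio/crio.sock": poll with
// os.Stat until the path exists or the deadline passes. Illustrative only;
// the real flow runs stat on the node over SSH.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}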
	I1017 19:25:33.448790  306747 start.go:563] Will wait 60s for crictl version
	I1017 19:25:33.448855  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:25:33.452484  306747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:25:33.483181  306747 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:25:33.483261  306747 ssh_runner.go:195] Run: crio --version
	I1017 19:25:33.520275  306747 ssh_runner.go:195] Run: crio --version
	I1017 19:25:33.555708  306747 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:25:33.558710  306747 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 19:25:33.561569  306747 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:25:33.577269  306747 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:25:33.581166  306747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:25:33.590512  306747 mustload.go:65] Loading cluster: ha-254035
	I1017 19:25:33.590749  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:25:33.591003  306747 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:25:33.607631  306747 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:25:33.607910  306747 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.3
	I1017 19:25:33.607918  306747 certs.go:195] generating shared ca certs ...
	I1017 19:25:33.607932  306747 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:25:33.608031  306747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:25:33.608069  306747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:25:33.608076  306747 certs.go:257] generating profile certs ...
	I1017 19:25:33.608151  306747 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:25:33.608210  306747 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.5a836dc6
	I1017 19:25:33.608248  306747 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:25:33.608256  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:25:33.608268  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:25:33.608278  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:25:33.608288  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:25:33.608298  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:25:33.608314  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:25:33.608325  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:25:33.608334  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:25:33.608382  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:25:33.608409  306747 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:25:33.608418  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:25:33.608439  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:25:33.608460  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:25:33.608482  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:25:33.608557  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:25:33.608586  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:25:33.608606  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:25:33.608635  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:25:33.608691  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:25:33.626221  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:25:33.720799  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 19:25:33.724641  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 19:25:33.732808  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 19:25:33.736200  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 19:25:33.744126  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 19:25:33.747465  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 19:25:33.755494  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 19:25:33.759075  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1017 19:25:33.767011  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 19:25:33.770516  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 19:25:33.778582  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 19:25:33.781925  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 19:25:33.789662  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:25:33.814144  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:25:33.834289  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:25:33.855264  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:25:33.875243  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:25:33.892238  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:25:33.909902  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:25:33.927819  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:25:33.945089  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:25:33.970864  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:25:33.990984  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:25:34.011449  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 19:25:34.027436  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 19:25:34.042890  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 19:25:34.058368  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1017 19:25:34.072057  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 19:25:34.088147  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 19:25:34.104554  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1017 19:25:34.119006  306747 ssh_runner.go:195] Run: openssl version
	I1017 19:25:34.125500  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:25:34.134066  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:25:34.138184  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:25:34.138272  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:25:34.179366  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:25:34.187225  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:25:34.195194  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:25:34.198812  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:25:34.198884  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:25:34.240748  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:25:34.248576  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:25:34.256442  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:25:34.260252  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:25:34.260343  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:25:34.301741  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:25:34.309494  306747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:25:34.313266  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:25:34.354021  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:25:34.403496  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:25:34.452995  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:25:34.501920  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:25:34.553096  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
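The -checkend 86400 calls above ask openssl whether each certificate expires within the next 24 hours; a failing check would force regeneration. The equivalent test in pure Go, shown only as an illustration of what -checkend verifies:

// Parse a PEM cert and report whether it expires within the given window
// (openssl x509 -checkend N fails when NotAfter is less than N seconds away).
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}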
	I1017 19:25:34.605637  306747 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1017 19:25:34.605735  306747 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:25:34.605768  306747 kube-vip.go:115] generating kube-vip config ...
	I1017 19:25:34.605818  306747 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:25:34.618260  306747 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:25:34.618384  306747 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1017 19:25:34.618473  306747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:25:34.626096  306747 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:25:34.626222  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 19:25:34.634241  306747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:25:34.648042  306747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:25:34.661462  306747 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 19:25:34.676617  306747 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:25:34.680227  306747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:25:34.690889  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:25:34.816737  306747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:25:34.831088  306747 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:25:34.831560  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:25:34.834934  306747 out.go:179] * Verifying Kubernetes components...
	I1017 19:25:34.837819  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:25:34.968993  306747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:25:34.983274  306747 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 19:25:34.983348  306747 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 19:25:34.983632  306747 node_ready.go:35] waiting up to 6m0s for node "ha-254035-m02" to be "Ready" ...
	I1017 19:25:40.996755  306747 node_ready.go:49] node "ha-254035-m02" is "Ready"
	I1017 19:25:40.996789  306747 node_ready.go:38] duration metric: took 6.013138239s for node "ha-254035-m02" to be "Ready" ...
	I1017 19:25:40.996811  306747 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:25:40.996889  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:41.497684  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:41.997836  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:42.497138  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:42.997736  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:43.497602  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:43.997356  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:44.497754  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:44.997290  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:45.497281  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:45.997333  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:46.497704  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:46.997128  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:47.497723  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:47.997671  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:48.497561  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:48.997733  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:49.497782  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:49.997750  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:50.497774  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:50.997177  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:51.497562  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:51.997821  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:52.497764  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:52.997863  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:53.497099  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:53.997052  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:54.497663  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:54.997664  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:55.497701  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:55.997019  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:56.497726  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:56.997168  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:57.497752  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:57.997835  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:58.497010  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:58.997743  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:59.497316  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:59.997012  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:00.497061  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:00.997884  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:01.497722  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:01.997039  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:02.497739  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:02.997315  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:03.497590  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:03.997754  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:04.497035  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:04.997744  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:05.497624  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:05.997419  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:06.497061  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:06.997596  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:07.497373  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:07.997733  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:08.497364  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:08.997732  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:09.497421  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:09.997728  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:10.497717  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:10.996987  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:11.497090  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:11.996943  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:12.497429  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:12.997010  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:13.496953  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:13.997093  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:14.497074  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:14.997281  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:15.497737  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:15.997688  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:16.497625  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:16.997704  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:17.497320  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:17.996949  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:18.497953  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:18.997042  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:19.497090  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:19.997041  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:20.497518  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:20.997019  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:21.497012  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:21.996982  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:22.497045  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:22.997657  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:23.497467  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:23.997803  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:24.497044  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:24.997325  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:25.497747  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:25.997044  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:26.497026  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:26.997552  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:27.497036  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:27.997604  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:28.497701  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:28.997373  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:29.497563  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:29.997697  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:30.497017  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:30.997407  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:31.497716  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:31.997874  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:32.497096  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:32.997561  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:33.497057  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:33.997665  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:34.497043  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
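The long run of pgrep calls above is a roughly half-second poll for a kube-apiserver process on the node; it never appears within the wait window, so the flow falls back to collecting container logs next. A minimal sketch of such a poll, assuming pgrep is available locally (the real flow runs it on the node over SSH):

// Poll pgrep about every 500ms until a kube-apiserver process is found or the
// window expires. Illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// exit status 0 means pgrep matched a running process
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	fmt.Println(waitForAPIServer(60 * time.Second))
}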
	I1017 19:26:34.997691  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:34.997800  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:35.032363  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:35.032386  306747 cri.go:89] found id: ""
	I1017 19:26:35.032399  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:35.032460  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.036381  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:35.036459  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:35.065338  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:35.065359  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:35.065364  306747 cri.go:89] found id: ""
	I1017 19:26:35.065371  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:35.065425  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.069065  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.072703  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:35.072774  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:35.103898  306747 cri.go:89] found id: ""
	I1017 19:26:35.103925  306747 logs.go:282] 0 containers: []
	W1017 19:26:35.103934  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:35.103941  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:35.104009  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:35.133147  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:35.133171  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:35.133176  306747 cri.go:89] found id: ""
	I1017 19:26:35.133189  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:35.133243  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.137074  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.140598  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:35.140672  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:35.172805  306747 cri.go:89] found id: ""
	I1017 19:26:35.172831  306747 logs.go:282] 0 containers: []
	W1017 19:26:35.172840  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:35.172847  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:35.172921  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:35.200314  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:35.200339  306747 cri.go:89] found id: ""
	I1017 19:26:35.200347  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:35.200399  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.204068  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:35.204142  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:35.229333  306747 cri.go:89] found id: ""
	I1017 19:26:35.229355  306747 logs.go:282] 0 containers: []
	W1017 19:26:35.229364  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:35.229373  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:35.229386  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:35.270788  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:35.270824  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:35.327408  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:35.327441  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:35.407924  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:35.407963  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:35.511553  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:35.511590  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:35.532712  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:35.532742  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:35.560601  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:35.560631  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:35.605951  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:35.605984  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:35.637220  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:35.637251  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:35.667818  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:35.667848  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:35.697952  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:35.697980  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:36.107033  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:36.098521    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.099526    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.100351    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.101907    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.102306    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:36.098521    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.099526    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.100351    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.101907    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.102306    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:38.608691  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:38.620441  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:38.620597  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:38.653949  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:38.653982  306747 cri.go:89] found id: ""
	I1017 19:26:38.653991  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:38.654045  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.657661  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:38.657779  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:38.682961  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:38.682992  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:38.682998  306747 cri.go:89] found id: ""
	I1017 19:26:38.683005  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:38.683057  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.686897  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.690246  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:38.690316  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:38.727058  306747 cri.go:89] found id: ""
	I1017 19:26:38.727088  306747 logs.go:282] 0 containers: []
	W1017 19:26:38.727096  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:38.727102  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:38.727159  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:38.751866  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:38.751891  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:38.751895  306747 cri.go:89] found id: ""
	I1017 19:26:38.751902  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:38.751960  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.755561  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.758764  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:38.758835  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:38.791573  306747 cri.go:89] found id: ""
	I1017 19:26:38.791597  306747 logs.go:282] 0 containers: []
	W1017 19:26:38.791607  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:38.791613  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:38.791672  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:38.818970  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:38.818993  306747 cri.go:89] found id: ""
	I1017 19:26:38.819002  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:38.819054  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.822644  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:38.822766  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:38.849350  306747 cri.go:89] found id: ""
	I1017 19:26:38.849373  306747 logs.go:282] 0 containers: []
	W1017 19:26:38.849381  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:38.849390  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:38.849436  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:38.883482  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:38.883512  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:38.978629  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:38.978664  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:39.055121  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:39.045881    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.046283    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.047962    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.048507    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.050096    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:39.045881    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.046283    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.047962    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.048507    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.050096    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:39.055145  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:39.055158  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:39.081488  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:39.081516  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:39.123529  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:39.123560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:39.152993  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:39.153024  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:39.181581  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:39.181608  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:39.199086  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:39.199116  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:39.231605  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:39.231638  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:39.287509  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:39.287544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:41.868969  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:41.879522  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:41.879591  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:41.906366  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:41.906388  306747 cri.go:89] found id: ""
	I1017 19:26:41.906397  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:41.906450  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:41.909979  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:41.910090  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:41.940072  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:41.940101  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:41.940105  306747 cri.go:89] found id: ""
	I1017 19:26:41.940113  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:41.940173  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:41.945194  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:41.948667  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:41.948784  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:41.979374  306747 cri.go:89] found id: ""
	I1017 19:26:41.979410  306747 logs.go:282] 0 containers: []
	W1017 19:26:41.979419  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:41.979425  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:41.979492  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:42.008367  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:42.008445  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:42.008465  306747 cri.go:89] found id: ""
	I1017 19:26:42.008493  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:42.008628  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:42.016467  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:42.031735  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:42.031876  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:42.079629  306747 cri.go:89] found id: ""
	I1017 19:26:42.079665  306747 logs.go:282] 0 containers: []
	W1017 19:26:42.079676  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:42.079684  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:42.079750  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:42.122316  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:42.122342  306747 cri.go:89] found id: ""
	I1017 19:26:42.122351  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:42.122423  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:42.131137  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:42.131241  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:42.200222  306747 cri.go:89] found id: ""
	I1017 19:26:42.200249  306747 logs.go:282] 0 containers: []
	W1017 19:26:42.200259  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:42.200270  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:42.200283  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:42.314817  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:42.314908  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:42.375712  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:42.375762  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:42.431602  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:42.431639  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:42.465004  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:42.465097  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:42.491256  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:42.491284  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:42.567094  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:42.558455    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.559104    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.560757    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.561472    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.563142    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:42.558455    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.559104    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.560757    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.561472    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.563142    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:42.567120  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:42.567134  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:42.597513  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:42.597543  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:42.632231  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:42.632268  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:42.659445  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:42.659478  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:42.686189  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:42.686217  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:45.285116  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:45.308457  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:45.308578  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:45.374050  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:45.374075  306747 cri.go:89] found id: ""
	I1017 19:26:45.374083  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:45.374152  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.386847  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:45.387031  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:45.432081  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:45.432105  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:45.432111  306747 cri.go:89] found id: ""
	I1017 19:26:45.432129  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:45.432185  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.436568  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.443473  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:45.443575  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:45.473992  306747 cri.go:89] found id: ""
	I1017 19:26:45.474066  306747 logs.go:282] 0 containers: []
	W1017 19:26:45.474095  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:45.474124  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:45.474279  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:45.508735  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:45.508808  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:45.508820  306747 cri.go:89] found id: ""
	I1017 19:26:45.508829  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:45.508889  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.513024  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.517047  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:45.517124  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:45.544672  306747 cri.go:89] found id: ""
	I1017 19:26:45.544698  306747 logs.go:282] 0 containers: []
	W1017 19:26:45.544707  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:45.544714  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:45.544814  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:45.577228  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:45.577250  306747 cri.go:89] found id: ""
	I1017 19:26:45.577257  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:45.577316  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.581280  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:45.581379  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:45.608143  306747 cri.go:89] found id: ""
	I1017 19:26:45.608166  306747 logs.go:282] 0 containers: []
	W1017 19:26:45.608174  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:45.608183  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:45.608226  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:45.627200  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:45.627230  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:45.699692  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:45.692149    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.692814    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.694339    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.694730    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.696164    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:45.692149    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.692814    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.694339    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.694730    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.696164    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:45.699717  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:45.699732  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:45.725239  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:45.725269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:45.766316  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:45.766359  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:45.831866  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:45.831908  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:45.869708  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:45.869736  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:45.910170  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:45.910198  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:46.010455  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:46.010498  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:46.047523  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:46.047559  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:46.076222  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:46.076306  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:48.663425  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:48.673865  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:48.673931  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:48.699244  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:48.699267  306747 cri.go:89] found id: ""
	I1017 19:26:48.699275  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:48.699330  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.702918  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:48.702988  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:48.729193  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:48.729268  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:48.729288  306747 cri.go:89] found id: ""
	I1017 19:26:48.729311  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:48.729390  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.732927  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.736821  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:48.736893  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:48.763745  306747 cri.go:89] found id: ""
	I1017 19:26:48.763770  306747 logs.go:282] 0 containers: []
	W1017 19:26:48.763780  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:48.763786  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:48.763842  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:48.790384  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:48.790407  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:48.790413  306747 cri.go:89] found id: ""
	I1017 19:26:48.790420  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:48.790496  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.796703  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.800342  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:48.800409  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:48.825802  306747 cri.go:89] found id: ""
	I1017 19:26:48.825830  306747 logs.go:282] 0 containers: []
	W1017 19:26:48.825839  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:48.825846  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:48.825904  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:48.863208  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:48.863231  306747 cri.go:89] found id: ""
	I1017 19:26:48.863239  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:48.863294  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.866822  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:48.866902  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:48.896937  306747 cri.go:89] found id: ""
	I1017 19:26:48.897017  306747 logs.go:282] 0 containers: []
	W1017 19:26:48.897039  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:48.897080  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:48.897109  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:48.999995  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:49.000071  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:49.019541  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:49.019629  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:49.045737  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:49.045806  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:49.106443  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:49.106478  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:49.135555  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:49.135583  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:49.162643  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:49.162670  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:49.240999  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:49.241038  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:49.311820  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:49.304505    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.305101    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.306817    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.307292    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.308350    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:49.304505    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.305101    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.306817    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.307292    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.308350    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:49.311849  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:49.311861  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:49.347575  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:49.347614  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:49.399291  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:49.399328  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:51.931612  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:51.944600  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:51.944667  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:51.977717  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:51.977741  306747 cri.go:89] found id: ""
	I1017 19:26:51.977750  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:51.977808  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:51.981757  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:51.981877  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:52.013943  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:52.013965  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:52.013971  306747 cri.go:89] found id: ""
	I1017 19:26:52.013979  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:52.014034  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.017876  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.021450  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:52.021529  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:52.054762  306747 cri.go:89] found id: ""
	I1017 19:26:52.054788  306747 logs.go:282] 0 containers: []
	W1017 19:26:52.054797  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:52.054804  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:52.054873  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:52.094469  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:52.094492  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:52.094498  306747 cri.go:89] found id: ""
	I1017 19:26:52.094506  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:52.094561  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.099707  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.103487  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:52.103557  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:52.137366  306747 cri.go:89] found id: ""
	I1017 19:26:52.137393  306747 logs.go:282] 0 containers: []
	W1017 19:26:52.137403  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:52.137410  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:52.137494  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:52.164118  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:52.164142  306747 cri.go:89] found id: ""
	I1017 19:26:52.164151  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:52.164235  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.167871  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:52.167951  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:52.195587  306747 cri.go:89] found id: ""
	I1017 19:26:52.195667  306747 logs.go:282] 0 containers: []
	W1017 19:26:52.195691  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:52.195730  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:52.195759  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:52.214865  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:52.214895  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:52.252677  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:52.252718  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:52.306241  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:52.306281  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:52.362956  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:52.362991  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:52.391628  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:52.391659  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:52.471864  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:52.463115    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.464242    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.464958    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.465978    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.466515    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:52.463115    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.464242    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.464958    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.465978    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.466515    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:52.471900  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:52.471915  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:52.518448  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:52.518483  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:52.552877  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:52.552904  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:52.635208  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:52.635241  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:52.671244  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:52.671274  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:55.270940  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:55.282002  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:55.282081  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:55.307829  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:55.307853  306747 cri.go:89] found id: ""
	I1017 19:26:55.307862  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:55.307917  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.311717  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:55.311788  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:55.337747  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:55.337770  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:55.337775  306747 cri.go:89] found id: ""
	I1017 19:26:55.337783  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:55.337840  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.341583  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.345443  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:55.345519  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:55.374240  306747 cri.go:89] found id: ""
	I1017 19:26:55.374268  306747 logs.go:282] 0 containers: []
	W1017 19:26:55.374277  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:55.374283  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:55.374348  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:55.400969  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:55.400994  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:55.400999  306747 cri.go:89] found id: ""
	I1017 19:26:55.401007  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:55.401074  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.405683  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.409216  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:55.409288  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:55.436866  306747 cri.go:89] found id: ""
	I1017 19:26:55.436897  306747 logs.go:282] 0 containers: []
	W1017 19:26:55.436907  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:55.436913  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:55.436972  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:55.469071  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:55.469094  306747 cri.go:89] found id: ""
	I1017 19:26:55.469103  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:55.469160  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.472979  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:55.473075  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:55.504006  306747 cri.go:89] found id: ""
	I1017 19:26:55.504033  306747 logs.go:282] 0 containers: []
	W1017 19:26:55.504043  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:55.504052  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:55.504064  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:55.530026  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:55.530065  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:55.566251  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:55.566281  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:55.619544  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:55.619580  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:55.647120  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:55.647155  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:55.674483  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:55.674552  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:55.771290  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:55.771328  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:55.791108  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:55.791139  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:55.877444  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:55.868298    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.869608    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.870496    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.871568    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.873502    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:55.868298    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.869608    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.870496    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.871568    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.873502    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:55.877467  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:55.877481  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:55.942292  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:55.942327  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:56.029233  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:56.029279  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:58.564639  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:58.575251  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:58.575327  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:58.603745  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:58.603769  306747 cri.go:89] found id: ""
	I1017 19:26:58.603778  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:58.603841  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.607600  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:58.607673  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:58.635364  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:58.635387  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:58.635393  306747 cri.go:89] found id: ""
	I1017 19:26:58.635401  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:58.635459  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.639164  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.642599  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:58.642665  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:58.671065  306747 cri.go:89] found id: ""
	I1017 19:26:58.671089  306747 logs.go:282] 0 containers: []
	W1017 19:26:58.671098  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:58.671105  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:58.671161  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:58.697581  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:58.697606  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:58.697613  306747 cri.go:89] found id: ""
	I1017 19:26:58.697621  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:58.697701  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.701636  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.705721  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:58.705790  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:58.739521  306747 cri.go:89] found id: ""
	I1017 19:26:58.739548  306747 logs.go:282] 0 containers: []
	W1017 19:26:58.739557  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:58.739563  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:58.739618  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:58.766994  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:58.767022  306747 cri.go:89] found id: ""
	I1017 19:26:58.767030  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:58.767085  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.771181  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:58.771253  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:58.798835  306747 cri.go:89] found id: ""
	I1017 19:26:58.798862  306747 logs.go:282] 0 containers: []
	W1017 19:26:58.798871  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:58.798880  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:58.798891  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:58.841984  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:58.842010  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:58.866669  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:58.866697  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:58.916756  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:58.916789  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:58.980015  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:58.980050  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:59.009380  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:59.009409  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:59.109257  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:59.109295  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:59.177549  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:59.168803    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.169600    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.171537    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.172076    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.173678    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:59.168803    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.169600    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.171537    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.172076    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.173678    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:59.177581  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:59.177599  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:59.206699  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:59.206727  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:59.242107  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:59.242142  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:59.275450  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:59.275479  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
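	(The block above is one full pass of minikube's apiserver wait loop: each pass starts with "sudo pgrep -xnf kube-apiserver.*minikube.*", lists the control-plane containers with crictl, then re-gathers kubelet, dmesg, etcd, scheduler, controller-manager and CRI-O logs; the "describe nodes" step fails on every pass because nothing answers on localhost:8443. A minimal sketch of the same probe run by hand on the node, assuming shell access such as "minikube ssh"; the /readyz path is the standard apiserver readiness endpoint and is not taken from this log:
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # is an apiserver process running at all?
	    sudo crictl ps -a --name=kube-apiserver          # is its container Running or Exited?
	    curl -sk https://localhost:8443/readyz?verbose   # does anything answer on the apiserver port?
	If the last command is refused while the first two succeed, the situation matches what the loop below keeps observing.)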
	I1017 19:27:01.857354  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:01.869639  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:01.869705  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:01.902744  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:01.902764  306747 cri.go:89] found id: ""
	I1017 19:27:01.902772  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:01.902838  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:01.906810  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:01.906935  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:01.934659  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:01.934722  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:01.934742  306747 cri.go:89] found id: ""
	I1017 19:27:01.934766  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:01.934853  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:01.938762  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:01.946146  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:01.946267  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:01.980395  306747 cri.go:89] found id: ""
	I1017 19:27:01.980461  306747 logs.go:282] 0 containers: []
	W1017 19:27:01.980482  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:01.980505  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:01.980614  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:02.015273  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:02.015298  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:02.015303  306747 cri.go:89] found id: ""
	I1017 19:27:02.015320  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:02.015383  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:02.019407  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:02.023456  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:02.023534  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:02.051152  306747 cri.go:89] found id: ""
	I1017 19:27:02.051182  306747 logs.go:282] 0 containers: []
	W1017 19:27:02.051192  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:02.051198  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:02.051258  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:02.080723  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:02.080745  306747 cri.go:89] found id: ""
	I1017 19:27:02.080753  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:02.080813  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:02.084603  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:02.084678  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:02.120072  306747 cri.go:89] found id: ""
	I1017 19:27:02.120146  306747 logs.go:282] 0 containers: []
	W1017 19:27:02.120170  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:02.120195  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:02.120230  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:02.139600  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:02.139631  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:02.185131  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:02.185166  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:02.229909  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:02.229940  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:02.260111  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:02.260140  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:02.288588  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:02.288618  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:02.370459  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:02.370495  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:02.476572  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:02.476608  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:02.551905  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:02.543576    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.544579    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.546057    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.546535    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.548140    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:02.543576    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.544579    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.546057    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.546535    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.548140    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:02.551926  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:02.551940  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:02.578293  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:02.578321  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:02.633456  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:02.633493  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:05.164689  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:05.177240  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:05.177315  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:05.205506  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:05.205530  306747 cri.go:89] found id: ""
	I1017 19:27:05.205540  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:05.205597  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.209410  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:05.209492  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:05.236360  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:05.236383  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:05.236388  306747 cri.go:89] found id: ""
	I1017 19:27:05.236396  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:05.236448  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.240255  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.243840  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:05.243907  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:05.279749  306747 cri.go:89] found id: ""
	I1017 19:27:05.279788  306747 logs.go:282] 0 containers: []
	W1017 19:27:05.279798  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:05.279804  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:05.279860  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:05.307767  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:05.307790  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:05.307796  306747 cri.go:89] found id: ""
	I1017 19:27:05.307803  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:05.307857  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.311429  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.314827  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:05.314906  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:05.340148  306747 cri.go:89] found id: ""
	I1017 19:27:05.340175  306747 logs.go:282] 0 containers: []
	W1017 19:27:05.340184  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:05.340190  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:05.340246  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:05.366040  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:05.366063  306747 cri.go:89] found id: ""
	I1017 19:27:05.366071  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:05.366145  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.369954  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:05.370054  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:05.396415  306747 cri.go:89] found id: ""
	I1017 19:27:05.396439  306747 logs.go:282] 0 containers: []
	W1017 19:27:05.396448  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:05.396457  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:05.396468  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:05.491768  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:05.491804  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:05.510133  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:05.510179  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:05.588291  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:05.580157    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.580846    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.582570    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.583481    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.584634    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:05.580157    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.580846    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.582570    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.583481    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.584634    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:05.588313  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:05.588326  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:05.616894  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:05.616921  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:05.660215  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:05.660252  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:05.715621  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:05.715657  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:05.744211  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:05.744240  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:05.777510  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:05.777544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:05.808038  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:05.808066  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:05.885964  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:05.886000  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
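	(The recurring "connection refused" on [::1]:8443 in the describe-nodes output means the kube-apiserver container that crictl keeps finding (134fec16b256...) exists but is not serving on port 8443. A hedged follow-up sketch; the ss check and the 50-line tail are assumptions for manual triage, not commands minikube runs, and the container ID is whatever crictl prints:
	    sudo ss -ltnp | grep 8443                                  # confirm nothing is listening on 8443
	    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)      # same query the loop runs above
	    sudo /usr/local/bin/crictl logs --tail 50 "$ID"            # read why the apiserver is not serving
	This reuses only the crictl invocations already shown in the log, narrowed to the one container of interest.)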
	I1017 19:27:08.420171  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:08.431142  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:08.431221  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:08.457528  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:08.457552  306747 cri.go:89] found id: ""
	I1017 19:27:08.457561  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:08.457616  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.461556  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:08.461665  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:08.492016  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:08.492039  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:08.492044  306747 cri.go:89] found id: ""
	I1017 19:27:08.492052  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:08.492103  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.495761  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.500185  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:08.500282  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:08.526916  306747 cri.go:89] found id: ""
	I1017 19:27:08.526941  306747 logs.go:282] 0 containers: []
	W1017 19:27:08.526950  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:08.526957  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:08.527014  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:08.556113  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:08.556134  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:08.556140  306747 cri.go:89] found id: ""
	I1017 19:27:08.556147  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:08.556214  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.560101  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.564014  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:08.564084  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:08.594033  306747 cri.go:89] found id: ""
	I1017 19:27:08.594056  306747 logs.go:282] 0 containers: []
	W1017 19:27:08.594071  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:08.594079  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:08.594135  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:08.620047  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:08.620113  306747 cri.go:89] found id: ""
	I1017 19:27:08.620142  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:08.620221  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.624310  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:08.624418  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:08.649502  306747 cri.go:89] found id: ""
	I1017 19:27:08.649567  306747 logs.go:282] 0 containers: []
	W1017 19:27:08.649595  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:08.649623  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:08.649648  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:08.743803  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:08.743839  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:08.769242  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:08.769268  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:08.799565  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:08.799593  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:08.828556  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:08.828635  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:08.846407  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:08.846438  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:08.930960  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:08.922375    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.923180    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.925039    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.925592    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.927335    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:08.922375    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.923180    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.925039    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.925592    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.927335    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:08.930984  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:08.930996  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:08.989884  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:08.989918  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:09.029740  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:09.029776  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:09.088750  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:09.088784  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:09.174757  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:09.174791  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:11.706527  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:11.717507  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:11.717580  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:11.742517  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:11.742540  306747 cri.go:89] found id: ""
	I1017 19:27:11.742548  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:11.742628  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.746473  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:11.746545  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:11.778260  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:11.778322  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:11.778341  306747 cri.go:89] found id: ""
	I1017 19:27:11.778364  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:11.778435  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.782026  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.785484  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:11.785543  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:11.816069  306747 cri.go:89] found id: ""
	I1017 19:27:11.816094  306747 logs.go:282] 0 containers: []
	W1017 19:27:11.816103  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:11.816109  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:11.816175  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:11.841738  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:11.841812  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:11.841832  306747 cri.go:89] found id: ""
	I1017 19:27:11.841848  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:11.841921  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.845737  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.849826  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:11.849962  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:11.877696  306747 cri.go:89] found id: ""
	I1017 19:27:11.877760  306747 logs.go:282] 0 containers: []
	W1017 19:27:11.877783  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:11.877806  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:11.877878  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:11.905454  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:11.905478  306747 cri.go:89] found id: ""
	I1017 19:27:11.905487  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:11.905551  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.909271  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:11.909371  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:11.937354  306747 cri.go:89] found id: ""
	I1017 19:27:11.937378  306747 logs.go:282] 0 containers: []
	W1017 19:27:11.937388  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:11.937397  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:11.937408  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:11.964198  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:11.964227  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:12.047655  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:12.047711  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:12.152282  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:12.152323  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:12.185576  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:12.185607  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:12.216321  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:12.216350  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:12.234007  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:12.234037  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:12.302472  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:12.293592    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.294322    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.296814    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.297401    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.299030    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:12.293592    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.294322    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.296814    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.297401    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.299030    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:12.302493  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:12.302508  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:12.361658  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:12.361692  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:12.396422  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:12.396455  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:12.450643  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:12.450679  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:14.981141  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:14.992478  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:14.992583  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:15.029616  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:15.029652  306747 cri.go:89] found id: ""
	I1017 19:27:15.029662  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:15.029733  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.034198  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:15.034280  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:15.067180  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:15.067204  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:15.067210  306747 cri.go:89] found id: ""
	I1017 19:27:15.067223  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:15.067278  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.071734  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.075202  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:15.075278  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:15.102244  306747 cri.go:89] found id: ""
	I1017 19:27:15.102269  306747 logs.go:282] 0 containers: []
	W1017 19:27:15.102278  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:15.102285  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:15.102345  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:15.130161  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:15.130189  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:15.130195  306747 cri.go:89] found id: ""
	I1017 19:27:15.130203  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:15.130258  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.134790  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.138971  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:15.139069  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:15.173861  306747 cri.go:89] found id: ""
	I1017 19:27:15.173886  306747 logs.go:282] 0 containers: []
	W1017 19:27:15.173896  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:15.173903  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:15.173964  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:15.202641  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:15.202665  306747 cri.go:89] found id: ""
	I1017 19:27:15.202674  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:15.202732  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.206633  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:15.206702  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:15.234246  306747 cri.go:89] found id: ""
	I1017 19:27:15.234273  306747 logs.go:282] 0 containers: []
	W1017 19:27:15.234283  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:15.234294  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:15.234305  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:15.315039  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:15.315073  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:15.418425  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:15.418463  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:15.436291  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:15.436322  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:15.508060  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:15.500418    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.501026    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.502514    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.502986    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.504397    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:15.500418    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.501026    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.502514    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.502986    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.504397    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:15.508127  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:15.508156  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:15.541312  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:15.541345  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:15.597746  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:15.597777  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:15.630514  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:15.630544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:15.662426  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:15.662454  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:15.690843  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:15.690870  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:15.737261  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:15.737305  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
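	(Each pass also pulls the kubelet and CRI-O journals with "journalctl -u kubelet -n 400" and "journalctl -u crio -n 400". For a manual spot check the same queries can be narrowed to recent errors; a small sketch, where the grep filter and 20-line tail are assumptions rather than anything minikube executes:
	    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20
	    sudo journalctl -u crio    -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20
	Anything kubelet or CRI-O reports about the apiserver container here usually explains why the wait loop never sees port 8443 come up.)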
	I1017 19:27:18.271724  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:18.282865  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:18.282933  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:18.310461  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:18.310530  306747 cri.go:89] found id: ""
	I1017 19:27:18.310545  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:18.310598  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.314206  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:18.314277  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:18.343711  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:18.343736  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:18.343741  306747 cri.go:89] found id: ""
	I1017 19:27:18.343750  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:18.343827  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.347663  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.351287  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:18.351359  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:18.378302  306747 cri.go:89] found id: ""
	I1017 19:27:18.378329  306747 logs.go:282] 0 containers: []
	W1017 19:27:18.378350  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:18.378356  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:18.378434  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:18.405852  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:18.405876  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:18.405881  306747 cri.go:89] found id: ""
	I1017 19:27:18.405889  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:18.405977  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.409609  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.413366  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:18.413434  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:18.438274  306747 cri.go:89] found id: ""
	I1017 19:27:18.438308  306747 logs.go:282] 0 containers: []
	W1017 19:27:18.438332  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:18.438348  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:18.438428  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:18.465310  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:18.465379  306747 cri.go:89] found id: ""
	I1017 19:27:18.465394  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:18.465449  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.469114  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:18.469267  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:18.495209  306747 cri.go:89] found id: ""
	I1017 19:27:18.495236  306747 logs.go:282] 0 containers: []
	W1017 19:27:18.495245  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:18.495254  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:18.495269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:18.521513  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:18.521541  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:18.551762  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:18.551788  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:18.647502  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:18.647539  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:18.665784  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:18.665815  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:18.718577  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:18.718624  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:18.777594  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:18.777628  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:18.807963  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:18.807989  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:18.892875  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:18.892910  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:18.960765  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:18.951643    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.952944    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.953536    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.955189    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.955840    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:18.951643    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.952944    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.953536    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.955189    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.955840    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:18.960787  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:18.960801  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:18.988908  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:18.988936  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:21.525356  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:21.536317  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:21.536383  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:21.562005  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:21.562074  306747 cri.go:89] found id: ""
	I1017 19:27:21.562089  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:21.562148  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.565814  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:21.565899  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:21.593641  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:21.593662  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:21.593668  306747 cri.go:89] found id: ""
	I1017 19:27:21.593675  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:21.593728  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.597715  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.601210  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:21.601286  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:21.626313  306747 cri.go:89] found id: ""
	I1017 19:27:21.626339  306747 logs.go:282] 0 containers: []
	W1017 19:27:21.626349  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:21.626355  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:21.626413  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:21.658772  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:21.658794  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:21.658800  306747 cri.go:89] found id: ""
	I1017 19:27:21.658807  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:21.658866  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.662812  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.666487  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:21.666561  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:21.698844  306747 cri.go:89] found id: ""
	I1017 19:27:21.698905  306747 logs.go:282] 0 containers: []
	W1017 19:27:21.698927  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:21.698951  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:21.699030  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:21.728779  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:21.728838  306747 cri.go:89] found id: ""
	I1017 19:27:21.728865  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:21.728939  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.732581  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:21.732691  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:21.758611  306747 cri.go:89] found id: ""
	I1017 19:27:21.758636  306747 logs.go:282] 0 containers: []
	W1017 19:27:21.758645  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:21.758655  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:21.758685  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:21.853910  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:21.853951  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:21.929259  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:21.920729    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.921839    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.923480    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.923794    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.925410    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:21.920729    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.921839    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.923480    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.923794    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.925410    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:21.929281  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:21.929294  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:21.969445  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:21.969472  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:22.060427  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:22.060560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:22.126121  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:22.126202  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:22.196425  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:22.196503  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:22.261955  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:22.262043  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:22.285064  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:22.285159  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:22.339749  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:22.339827  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:22.385350  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:22.385427  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:24.966467  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:24.992294  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:24.992366  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:25.035727  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:25.035754  306747 cri.go:89] found id: ""
	I1017 19:27:25.035762  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:25.035847  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.040229  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:25.040304  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:25.088117  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:25.088145  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:25.088152  306747 cri.go:89] found id: ""
	I1017 19:27:25.088159  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:25.088215  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.092329  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.099299  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:25.099383  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:25.150822  306747 cri.go:89] found id: ""
	I1017 19:27:25.150858  306747 logs.go:282] 0 containers: []
	W1017 19:27:25.150868  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:25.150878  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:25.150945  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:25.211825  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:25.211850  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:25.211855  306747 cri.go:89] found id: ""
	I1017 19:27:25.211863  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:25.211927  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.217398  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.221047  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:25.221126  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:25.258850  306747 cri.go:89] found id: ""
	I1017 19:27:25.258885  306747 logs.go:282] 0 containers: []
	W1017 19:27:25.258895  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:25.258904  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:25.258968  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:25.295477  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:25.295500  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:25.295512  306747 cri.go:89] found id: ""
	I1017 19:27:25.295520  306747 logs.go:282] 2 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:25.295576  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.301386  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.305803  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:25.305873  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:25.334929  306747 cri.go:89] found id: ""
	I1017 19:27:25.334954  306747 logs.go:282] 0 containers: []
	W1017 19:27:25.334970  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:25.334986  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:25.335006  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:25.365373  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:25.365402  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:25.382590  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:25.382626  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:25.432469  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:25.432570  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:25.478525  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:25.478601  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:25.551480  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:25.551560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:25.583783  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:25.583858  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:25.679255  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:25.679301  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:25.739090  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:25.739118  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:25.854982  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:25.855021  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:25.955288  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:25.946765    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.947610    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.949285    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.949589    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.951072    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:25.946765    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.947610    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.949285    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.949589    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.951072    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:25.955307  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:25.955319  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:26.000458  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:26.000579  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:28.530525  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:28.542430  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:28.542500  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:28.570373  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:28.570394  306747 cri.go:89] found id: ""
	I1017 19:27:28.570402  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:28.570454  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.575832  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:28.575903  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:28.604287  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:28.604307  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:28.604313  306747 cri.go:89] found id: ""
	I1017 19:27:28.604320  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:28.604374  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.608248  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.612312  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:28.612380  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:28.638709  306747 cri.go:89] found id: ""
	I1017 19:27:28.638735  306747 logs.go:282] 0 containers: []
	W1017 19:27:28.638743  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:28.638750  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:28.638807  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:28.665927  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:28.665951  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:28.665957  306747 cri.go:89] found id: ""
	I1017 19:27:28.665964  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:28.666022  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.669671  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.673220  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:28.673317  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:28.703161  306747 cri.go:89] found id: ""
	I1017 19:27:28.703188  306747 logs.go:282] 0 containers: []
	W1017 19:27:28.703197  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:28.703204  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:28.703264  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:28.733314  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:28.733379  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:28.733389  306747 cri.go:89] found id: ""
	I1017 19:27:28.733397  306747 logs.go:282] 2 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:28.733460  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.736998  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.740330  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:28.740444  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:28.765130  306747 cri.go:89] found id: ""
	I1017 19:27:28.765156  306747 logs.go:282] 0 containers: []
	W1017 19:27:28.765165  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:28.765174  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:28.765216  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:28.834887  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:28.826610    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.827402    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.829127    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.829428    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.830934    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:28.826610    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.827402    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.829127    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.829428    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.830934    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:28.834910  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:28.834923  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:28.870142  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:28.870187  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:28.912354  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:28.912388  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:28.968695  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:28.968728  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:29.009047  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:29.009078  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:29.036706  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:29.036734  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:29.120616  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:29.120654  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:29.153285  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:29.153313  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:29.250625  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:29.250664  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:29.271875  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:29.271907  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:29.321668  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:29.321703  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:31.848333  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:31.859324  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:31.859392  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:31.892308  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:31.892331  306747 cri.go:89] found id: ""
	I1017 19:27:31.892347  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:31.892401  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.896342  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:31.896433  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:31.924335  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:31.924359  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:31.924364  306747 cri.go:89] found id: ""
	I1017 19:27:31.924371  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:31.924446  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.928119  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.931375  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:31.931444  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:31.961757  306747 cri.go:89] found id: ""
	I1017 19:27:31.961783  306747 logs.go:282] 0 containers: []
	W1017 19:27:31.961792  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:31.961800  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:31.961857  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:31.990900  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:31.990924  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:31.990929  306747 cri.go:89] found id: ""
	I1017 19:27:31.990937  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:31.990997  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.994670  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.998160  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:31.998292  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:32.030448  306747 cri.go:89] found id: ""
	I1017 19:27:32.030523  306747 logs.go:282] 0 containers: []
	W1017 19:27:32.030539  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:32.030548  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:32.030615  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:32.062242  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:32.062267  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:32.062272  306747 cri.go:89] found id: ""
	I1017 19:27:32.062280  306747 logs.go:282] 2 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:32.062332  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:32.066062  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:32.069606  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:32.069682  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:32.102492  306747 cri.go:89] found id: ""
	I1017 19:27:32.102534  306747 logs.go:282] 0 containers: []
	W1017 19:27:32.102544  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:32.102553  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:32.102566  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:32.179017  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:32.170484    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.170960    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.172496    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.172884    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.174718    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:32.170484    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.170960    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.172496    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.172884    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.174718    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:32.179037  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:32.179050  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:32.225447  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:32.225475  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:32.270526  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:32.270557  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:32.304149  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:32.304181  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:32.330757  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:32.330837  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:32.410571  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:32.410610  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:32.443417  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:32.443444  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:32.461860  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:32.461890  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:32.510037  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:32.510083  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:32.569278  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:32.569325  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:32.602243  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:32.602269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:35.200643  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:35.211574  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:35.211646  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:35.243134  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:35.243158  306747 cri.go:89] found id: ""
	I1017 19:27:35.243166  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:35.243222  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.247054  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:35.247144  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:35.276216  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:35.276237  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:35.276243  306747 cri.go:89] found id: ""
	I1017 19:27:35.276251  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:35.276304  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.280057  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.284007  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:35.284080  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:35.310830  306747 cri.go:89] found id: ""
	I1017 19:27:35.310909  306747 logs.go:282] 0 containers: []
	W1017 19:27:35.310932  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:35.310955  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:35.311062  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:35.354572  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:35.354597  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:35.354602  306747 cri.go:89] found id: ""
	I1017 19:27:35.354610  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:35.354666  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.358450  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.361871  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:35.361942  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:35.389041  306747 cri.go:89] found id: ""
	I1017 19:27:35.389065  306747 logs.go:282] 0 containers: []
	W1017 19:27:35.389073  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:35.389079  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:35.389137  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:35.415942  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:35.415967  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:35.415972  306747 cri.go:89] found id: ""
	I1017 19:27:35.415980  306747 logs.go:282] 2 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:35.416037  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.419700  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.423643  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:35.423765  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:35.450381  306747 cri.go:89] found id: ""
	I1017 19:27:35.450404  306747 logs.go:282] 0 containers: []
	W1017 19:27:35.450413  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:35.450422  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:35.450435  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:35.478252  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:35.478280  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:35.522590  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:35.522623  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:35.578335  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:35.578372  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:35.613061  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:35.613091  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:35.638492  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:35.638520  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:35.722854  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:35.722891  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:35.757639  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:35.757672  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:35.863697  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:35.863735  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:35.940574  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:35.932704    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.933394    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.935016    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.935464    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.936965    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:35.932704    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.933394    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.935016    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.935464    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.936965    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:35.940597  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:35.940610  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:35.976992  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:35.977024  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:36.004857  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:36.004894  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:38.527370  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:38.538426  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:38.538499  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:38.564462  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:38.564484  306747 cri.go:89] found id: ""
	I1017 19:27:38.564504  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:38.564583  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.568393  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:38.568469  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:38.593756  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:38.593785  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:38.593790  306747 cri.go:89] found id: ""
	I1017 19:27:38.593797  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:38.593850  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.597636  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.601069  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:38.601138  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:38.628357  306747 cri.go:89] found id: ""
	I1017 19:27:38.628382  306747 logs.go:282] 0 containers: []
	W1017 19:27:38.628391  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:38.628398  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:38.628455  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:38.653998  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:38.654020  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:38.654025  306747 cri.go:89] found id: ""
	I1017 19:27:38.654033  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:38.654092  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.658000  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.661429  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:38.661500  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:38.687831  306747 cri.go:89] found id: ""
	I1017 19:27:38.687857  306747 logs.go:282] 0 containers: []
	W1017 19:27:38.687866  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:38.687873  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:38.687939  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:38.728871  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:38.728893  306747 cri.go:89] found id: ""
	I1017 19:27:38.728902  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:38.728956  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.732553  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:38.732626  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:38.758108  306747 cri.go:89] found id: ""
	I1017 19:27:38.758131  306747 logs.go:282] 0 containers: []
	W1017 19:27:38.758139  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:38.758149  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:38.758160  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:38.856927  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:38.857005  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:38.875545  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:38.875575  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:38.948879  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:38.941082    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.941735    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.943334    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.943798    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.945334    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:38.941082    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.941735    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.943334    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.943798    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.945334    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:38.948901  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:38.948914  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:38.997335  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:38.997372  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:39.029015  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:39.029043  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:39.108011  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:39.108046  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:39.141940  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:39.141971  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:39.170446  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:39.170472  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:39.208445  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:39.208481  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:39.272902  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:39.272952  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:41.807281  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:41.817677  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:41.817808  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:41.847030  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:41.847052  306747 cri.go:89] found id: ""
	I1017 19:27:41.847060  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:41.847141  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.856702  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:41.856768  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:41.882291  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:41.882314  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:41.882320  306747 cri.go:89] found id: ""
	I1017 19:27:41.882337  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:41.882441  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.886489  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.896574  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:41.896698  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:41.922724  306747 cri.go:89] found id: ""
	I1017 19:27:41.922748  306747 logs.go:282] 0 containers: []
	W1017 19:27:41.922757  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:41.922763  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:41.922817  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:41.948998  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:41.949024  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:41.949030  306747 cri.go:89] found id: ""
	I1017 19:27:41.949038  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:41.949090  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.961165  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.965546  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:41.965617  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:41.994892  306747 cri.go:89] found id: ""
	I1017 19:27:41.994917  306747 logs.go:282] 0 containers: []
	W1017 19:27:41.994935  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:41.994943  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:41.995002  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:42.028588  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:42.028626  306747 cri.go:89] found id: ""
	I1017 19:27:42.028636  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:42.028712  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:42.035671  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:42.035764  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:42.067030  306747 cri.go:89] found id: ""
	I1017 19:27:42.067061  306747 logs.go:282] 0 containers: []
	W1017 19:27:42.067072  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:42.067081  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:42.067105  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:42.109133  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:42.109175  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:42.199861  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:42.199955  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:42.342289  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:42.342335  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:42.363849  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:42.363906  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:42.441824  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:42.432639    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.433836    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.434718    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.436054    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.436745    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:42.432639    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.433836    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.434718    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.436054    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.436745    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:42.441858  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:42.441872  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:42.471376  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:42.471404  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:42.516923  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:42.516960  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:42.595252  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:42.595288  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:42.623727  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:42.623757  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:42.665018  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:42.665048  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:45.203111  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:45.228005  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:45.228167  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:45.284064  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:45.284089  306747 cri.go:89] found id: ""
	I1017 19:27:45.284098  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:45.284165  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.293975  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:45.294167  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:45.366214  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:45.366372  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:45.366394  306747 cri.go:89] found id: ""
	I1017 19:27:45.366421  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:45.366520  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.385006  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.397052  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:45.397258  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:45.444612  306747 cri.go:89] found id: ""
	I1017 19:27:45.444689  306747 logs.go:282] 0 containers: []
	W1017 19:27:45.444712  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:45.444737  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:45.444839  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:45.475398  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:45.475418  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:45.475422  306747 cri.go:89] found id: ""
	I1017 19:27:45.475430  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:45.475483  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.480459  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.484700  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:45.484826  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:45.516264  306747 cri.go:89] found id: ""
	I1017 19:27:45.516289  306747 logs.go:282] 0 containers: []
	W1017 19:27:45.516298  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:45.516305  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:45.516385  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:45.545867  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:45.545891  306747 cri.go:89] found id: ""
	I1017 19:27:45.545900  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:45.545955  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.549781  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:45.549898  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:45.578811  306747 cri.go:89] found id: ""
	I1017 19:27:45.578837  306747 logs.go:282] 0 containers: []
	W1017 19:27:45.578847  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:45.578857  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:45.578870  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:45.605475  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:45.605507  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:45.687039  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:45.687081  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:45.755076  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:45.746538    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.747381    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.749046    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.749635    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.751252    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:45.746538    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.747381    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.749046    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.749635    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.751252    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:45.755099  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:45.755114  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:45.784001  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:45.784034  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:45.837928  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:45.837964  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:45.914633  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:45.914670  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:45.950096  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:45.950123  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:46.054149  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:46.054194  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:46.072594  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:46.072628  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:46.111999  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:46.112030  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:48.642924  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:48.653451  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:48.653519  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:48.679639  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:48.679659  306747 cri.go:89] found id: ""
	I1017 19:27:48.679667  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:48.679720  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.683701  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:48.683775  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:48.711679  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:48.711701  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:48.711707  306747 cri.go:89] found id: ""
	I1017 19:27:48.711714  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:48.711767  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.715462  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.718828  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:48.718914  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:48.745090  306747 cri.go:89] found id: ""
	I1017 19:27:48.745156  306747 logs.go:282] 0 containers: []
	W1017 19:27:48.745170  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:48.745178  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:48.745236  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:48.772250  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:48.772273  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:48.772278  306747 cri.go:89] found id: ""
	I1017 19:27:48.772286  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:48.772344  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.776030  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.779386  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:48.779454  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:48.805859  306747 cri.go:89] found id: ""
	I1017 19:27:48.805884  306747 logs.go:282] 0 containers: []
	W1017 19:27:48.805893  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:48.805900  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:48.805957  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:48.831953  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:48.831975  306747 cri.go:89] found id: ""
	I1017 19:27:48.831984  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:48.832040  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.835702  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:48.835770  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:48.869137  306747 cri.go:89] found id: ""
	I1017 19:27:48.869159  306747 logs.go:282] 0 containers: []
	W1017 19:27:48.869168  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:48.869177  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:48.869190  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:48.910676  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:48.910711  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:48.972655  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:48.972690  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:49.013320  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:49.013350  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:49.093756  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:49.093796  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:49.137959  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:49.137988  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:49.207174  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:49.198952    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.199631    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.201291    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.201757    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.203195    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:49.198952    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.199631    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.201291    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.201757    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.203195    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:49.207199  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:49.207215  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:49.255066  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:49.255135  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:49.283732  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:49.283760  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:49.395846  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:49.395882  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:49.414130  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:49.414161  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:51.941734  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:51.953584  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:51.953657  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:51.984051  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:51.984073  306747 cri.go:89] found id: ""
	I1017 19:27:51.984081  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:51.984225  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:51.989195  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:51.989276  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:52.018264  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:52.018291  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:52.018296  306747 cri.go:89] found id: ""
	I1017 19:27:52.018305  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:52.018390  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.022319  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.026112  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:52.026196  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:52.054070  306747 cri.go:89] found id: ""
	I1017 19:27:52.054097  306747 logs.go:282] 0 containers: []
	W1017 19:27:52.054107  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:52.054114  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:52.054234  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:52.091016  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:52.091040  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:52.091045  306747 cri.go:89] found id: ""
	I1017 19:27:52.091052  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:52.091109  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.095213  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.098982  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:52.099079  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:52.126556  306747 cri.go:89] found id: ""
	I1017 19:27:52.126590  306747 logs.go:282] 0 containers: []
	W1017 19:27:52.126601  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:52.126607  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:52.126676  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:52.158449  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:52.158473  306747 cri.go:89] found id: ""
	I1017 19:27:52.158482  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:52.158543  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.162572  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:52.162647  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:52.192007  306747 cri.go:89] found id: ""
	I1017 19:27:52.192033  306747 logs.go:282] 0 containers: []
	W1017 19:27:52.192042  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:52.192052  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:52.192066  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:52.209934  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:52.209966  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:52.285387  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:52.276095    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.276908    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.278520    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.279497    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.280119    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:52.276095    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.276908    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.278520    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.279497    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.280119    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:52.285410  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:52.285426  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:52.314784  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:52.314812  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:52.349858  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:52.349896  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:52.417120  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:52.417160  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:52.447498  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:52.447525  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:52.525405  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:52.525442  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:52.568336  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:52.568364  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:52.667592  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:52.667629  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:52.714508  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:52.714544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:55.241965  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:55.252843  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:55.252914  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:55.281150  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:55.281173  306747 cri.go:89] found id: ""
	I1017 19:27:55.281181  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:55.281254  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.285436  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:55.285508  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:55.311561  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:55.311585  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:55.311590  306747 cri.go:89] found id: ""
	I1017 19:27:55.311598  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:55.311654  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.315303  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.318720  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:55.318789  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:55.342910  306747 cri.go:89] found id: ""
	I1017 19:27:55.342937  306747 logs.go:282] 0 containers: []
	W1017 19:27:55.342946  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:55.342953  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:55.343012  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:55.369108  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:55.369130  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:55.369136  306747 cri.go:89] found id: ""
	I1017 19:27:55.369154  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:55.369212  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.372980  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.376499  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:55.376598  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:55.409872  306747 cri.go:89] found id: ""
	I1017 19:27:55.409898  306747 logs.go:282] 0 containers: []
	W1017 19:27:55.409907  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:55.409914  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:55.409970  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:55.435703  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:55.435725  306747 cri.go:89] found id: ""
	I1017 19:27:55.435734  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:55.435787  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.439520  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:55.439587  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:55.466991  306747 cri.go:89] found id: ""
	I1017 19:27:55.467017  306747 logs.go:282] 0 containers: []
	W1017 19:27:55.467026  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:55.467036  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:55.467048  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:55.492985  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:55.493014  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:55.566914  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:55.566950  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:55.643727  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:55.635444    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.636184    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.637061    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.638074    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.638650    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:55.635444    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.636184    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.637061    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.638074    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.638650    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:55.643796  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:55.643817  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:55.670365  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:55.670394  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:55.705898  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:55.705936  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:55.732124  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:55.732152  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:55.762958  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:55.762987  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:55.857491  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:55.857528  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:55.875620  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:55.875658  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:55.953454  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:55.953501  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:58.520452  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:58.530935  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:58.531015  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:58.557433  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:58.557455  306747 cri.go:89] found id: ""
	I1017 19:27:58.557464  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:58.557521  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.561276  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:58.561345  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:58.587982  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:58.588006  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:58.588011  306747 cri.go:89] found id: ""
	I1017 19:27:58.588018  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:58.588072  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.591894  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.595410  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:58.595490  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:58.620930  306747 cri.go:89] found id: ""
	I1017 19:27:58.620956  306747 logs.go:282] 0 containers: []
	W1017 19:27:58.620966  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:58.620972  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:58.621038  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:58.646484  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:58.646509  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:58.646514  306747 cri.go:89] found id: ""
	I1017 19:27:58.646522  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:58.646573  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.650281  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.653491  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:58.653564  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:58.679227  306747 cri.go:89] found id: ""
	I1017 19:27:58.679251  306747 logs.go:282] 0 containers: []
	W1017 19:27:58.679261  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:58.679271  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:58.679329  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:58.712878  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:58.712901  306747 cri.go:89] found id: ""
	I1017 19:27:58.712910  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:58.712965  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.717668  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:58.717744  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:58.743926  306747 cri.go:89] found id: ""
	I1017 19:27:58.743950  306747 logs.go:282] 0 containers: []
	W1017 19:27:58.743960  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:58.743969  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:58.743981  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:58.816251  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:58.808176    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.809065    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.810666    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.810959    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.812492    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:58.808176    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.809065    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.810666    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.810959    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.812492    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:58.816275  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:58.816289  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:58.880149  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:58.880187  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:58.926347  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:58.926379  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:58.959298  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:58.959326  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:58.985914  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:58.985941  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:59.060169  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:59.060206  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:59.098174  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:59.098204  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:59.193263  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:59.193298  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:59.223428  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:59.223461  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:59.282679  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:59.282714  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:01.802237  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:01.814388  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:01.814466  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:01.840376  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:01.840398  306747 cri.go:89] found id: ""
	I1017 19:28:01.840412  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:01.840465  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.844426  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:01.844496  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:01.873063  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:01.873085  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:01.873090  306747 cri.go:89] found id: ""
	I1017 19:28:01.873098  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:01.873155  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.877190  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.881085  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:01.881173  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:01.908701  306747 cri.go:89] found id: ""
	I1017 19:28:01.908726  306747 logs.go:282] 0 containers: []
	W1017 19:28:01.908736  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:01.908742  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:01.908799  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:01.936306  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:01.936330  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:01.936335  306747 cri.go:89] found id: ""
	I1017 19:28:01.936343  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:01.936397  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.940768  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.946060  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:01.946131  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:01.974191  306747 cri.go:89] found id: ""
	I1017 19:28:01.974217  306747 logs.go:282] 0 containers: []
	W1017 19:28:01.974227  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:01.974234  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:01.974299  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:02.003021  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:02.003047  306747 cri.go:89] found id: ""
	I1017 19:28:02.003056  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:02.003132  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:02.016728  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:02.016803  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:02.046662  306747 cri.go:89] found id: ""
	I1017 19:28:02.046688  306747 logs.go:282] 0 containers: []
	W1017 19:28:02.046697  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:02.046708  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:02.046744  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:02.076638  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:02.076670  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:02.097353  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:02.097384  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:02.149812  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:02.149852  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:02.212958  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:02.212995  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:02.242664  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:02.242692  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:02.329225  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:02.329262  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:02.364870  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:02.364906  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:02.472339  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:02.472377  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:02.541865  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:02.533392    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.534027    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.535792    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.536454    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.537580    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:02.533392    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.534027    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.535792    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.536454    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.537580    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:02.541887  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:02.541900  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:02.570859  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:02.570888  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:05.110395  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:05.121645  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:05.121716  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:05.153742  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:05.153766  306747 cri.go:89] found id: ""
	I1017 19:28:05.153775  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:05.153829  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.157576  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:05.157647  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:05.184788  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:05.184810  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:05.184815  306747 cri.go:89] found id: ""
	I1017 19:28:05.184823  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:05.184878  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.188586  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.192151  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:05.192222  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:05.222405  306747 cri.go:89] found id: ""
	I1017 19:28:05.222437  306747 logs.go:282] 0 containers: []
	W1017 19:28:05.222447  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:05.222453  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:05.222512  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:05.251383  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:05.251408  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:05.251413  306747 cri.go:89] found id: ""
	I1017 19:28:05.251421  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:05.251474  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.255443  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.258903  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:05.258971  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:05.289906  306747 cri.go:89] found id: ""
	I1017 19:28:05.289983  306747 logs.go:282] 0 containers: []
	W1017 19:28:05.289999  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:05.290007  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:05.290065  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:05.317057  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:05.317122  306747 cri.go:89] found id: ""
	I1017 19:28:05.317136  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:05.317202  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.320997  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:05.321071  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:05.350310  306747 cri.go:89] found id: ""
	I1017 19:28:05.350335  306747 logs.go:282] 0 containers: []
	W1017 19:28:05.350344  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:05.350353  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:05.350364  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:05.387607  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:05.387637  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:05.456949  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:05.448355    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.449098    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.450777    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.451358    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.452970    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:05.448355    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.449098    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.450777    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.451358    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.452970    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:05.457018  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:05.457045  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:05.484064  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:05.484139  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:05.543816  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:05.543851  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:05.573032  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:05.573058  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:05.651816  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:05.651853  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:05.753730  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:05.753765  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:05.772288  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:05.772320  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:05.827946  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:05.827982  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:05.872696  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:05.872731  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:08.406970  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:08.417284  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:08.417352  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:08.443772  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:08.443796  306747 cri.go:89] found id: ""
	I1017 19:28:08.443815  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:08.443868  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.447541  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:08.447633  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:08.472976  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:08.473004  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:08.473009  306747 cri.go:89] found id: ""
	I1017 19:28:08.473017  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:08.473070  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.476664  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.480025  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:08.480095  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:08.507100  306747 cri.go:89] found id: ""
	I1017 19:28:08.507122  306747 logs.go:282] 0 containers: []
	W1017 19:28:08.507130  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:08.507136  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:08.507194  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:08.532864  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:08.532888  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:08.532895  306747 cri.go:89] found id: ""
	I1017 19:28:08.532912  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:08.532966  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.536602  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.540037  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:08.540108  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:08.566233  306747 cri.go:89] found id: ""
	I1017 19:28:08.566258  306747 logs.go:282] 0 containers: []
	W1017 19:28:08.566267  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:08.566273  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:08.566348  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:08.593545  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:08.593568  306747 cri.go:89] found id: ""
	I1017 19:28:08.593577  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:08.593630  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.597170  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:08.597251  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:08.622805  306747 cri.go:89] found id: ""
	I1017 19:28:08.622829  306747 logs.go:282] 0 containers: []
	W1017 19:28:08.622838  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:08.622847  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:08.622886  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:08.718117  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:08.718158  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:08.736317  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:08.736358  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:08.785165  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:08.785200  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:08.813123  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:08.813154  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:08.842670  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:08.842698  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:08.883049  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:08.883081  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:08.948658  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:08.940826    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.941602    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.943150    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.943452    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.944921    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:08.940826    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.941602    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.943150    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.943452    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.944921    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:08.948680  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:08.948693  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:08.975235  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:08.975261  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:09.023572  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:09.023607  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:09.085674  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:09.085713  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:11.674341  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:11.684867  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:11.684937  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:11.710235  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:11.710258  306747 cri.go:89] found id: ""
	I1017 19:28:11.710266  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:11.710317  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.713823  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:11.713893  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:11.743536  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:11.743557  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:11.743564  306747 cri.go:89] found id: ""
	I1017 19:28:11.743571  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:11.743623  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.747225  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.750360  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:11.750423  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:11.775489  306747 cri.go:89] found id: ""
	I1017 19:28:11.775553  306747 logs.go:282] 0 containers: []
	W1017 19:28:11.775575  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:11.775599  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:11.775689  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:11.804973  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:11.804993  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:11.804999  306747 cri.go:89] found id: ""
	I1017 19:28:11.805007  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:11.805064  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.809085  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.812425  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:11.812493  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:11.839019  306747 cri.go:89] found id: ""
	I1017 19:28:11.839042  306747 logs.go:282] 0 containers: []
	W1017 19:28:11.839051  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:11.839057  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:11.839113  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:11.867946  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:11.868012  306747 cri.go:89] found id: ""
	I1017 19:28:11.868036  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:11.868125  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.871735  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:11.871847  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:11.917369  306747 cri.go:89] found id: ""
	I1017 19:28:11.917435  306747 logs.go:282] 0 containers: []
	W1017 19:28:11.917448  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:11.917458  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:11.917473  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:12.015837  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:12.015876  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:12.037612  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:12.037645  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:12.066665  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:12.066695  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:12.124283  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:12.124321  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:12.157456  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:12.157487  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:12.218566  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:12.218603  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:12.246576  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:12.246601  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:12.323228  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:12.323263  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:12.389358  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:12.381335    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.382085    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.383576    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.384016    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.385432    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:12.381335    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.382085    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.383576    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.384016    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.385432    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:12.389381  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:12.389394  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:12.420218  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:12.420248  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:14.967518  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:14.978398  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:14.978489  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:15.008833  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:15.008861  306747 cri.go:89] found id: ""
	I1017 19:28:15.008869  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:15.008962  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.019024  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:15.019115  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:15.048619  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:15.048641  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:15.048646  306747 cri.go:89] found id: ""
	I1017 19:28:15.048653  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:15.048711  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.052829  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.056849  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:15.056960  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:15.090614  306747 cri.go:89] found id: ""
	I1017 19:28:15.090646  306747 logs.go:282] 0 containers: []
	W1017 19:28:15.090670  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:15.090679  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:15.090755  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:15.121287  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:15.121354  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:15.121367  306747 cri.go:89] found id: ""
	I1017 19:28:15.121376  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:15.121441  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.126749  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.130705  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:15.130786  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:15.158437  306747 cri.go:89] found id: ""
	I1017 19:28:15.158462  306747 logs.go:282] 0 containers: []
	W1017 19:28:15.158472  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:15.158479  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:15.158542  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:15.187795  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:15.187819  306747 cri.go:89] found id: ""
	I1017 19:28:15.187828  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:15.187885  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.191939  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:15.192014  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:15.221830  306747 cri.go:89] found id: ""
	I1017 19:28:15.221856  306747 logs.go:282] 0 containers: []
	W1017 19:28:15.221866  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:15.221875  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:15.221886  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:15.314949  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:15.314983  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:15.334443  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:15.334524  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:15.391124  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:15.391159  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:15.464757  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:15.464794  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:15.499089  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:15.499118  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:15.572721  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:15.572758  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:15.604780  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:15.604809  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:15.673978  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:15.665870    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.666574    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.668276    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.668888    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.670272    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:15.665870    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.666574    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.668276    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.668888    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.670272    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:15.674001  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:15.674014  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:15.703550  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:15.703577  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:15.736137  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:15.736167  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:18.272459  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:18.284130  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:18.284202  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:18.317045  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:18.317114  306747 cri.go:89] found id: ""
	I1017 19:28:18.317140  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:18.317200  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.320946  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:18.321021  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:18.349966  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:18.350047  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:18.350069  306747 cri.go:89] found id: ""
	I1017 19:28:18.350078  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:18.350146  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.354094  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.357736  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:18.357840  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:18.389890  306747 cri.go:89] found id: ""
	I1017 19:28:18.389914  306747 logs.go:282] 0 containers: []
	W1017 19:28:18.389923  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:18.389929  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:18.389990  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:18.416552  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:18.416573  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:18.416577  306747 cri.go:89] found id: ""
	I1017 19:28:18.416584  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:18.416636  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.421408  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.425021  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:18.425127  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:18.451716  306747 cri.go:89] found id: ""
	I1017 19:28:18.451744  306747 logs.go:282] 0 containers: []
	W1017 19:28:18.451754  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:18.451760  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:18.451824  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:18.486286  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:18.486355  306747 cri.go:89] found id: ""
	I1017 19:28:18.486370  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:18.486424  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.490097  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:18.490214  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:18.517834  306747 cri.go:89] found id: ""
	I1017 19:28:18.517859  306747 logs.go:282] 0 containers: []
	W1017 19:28:18.517868  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:18.517877  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:18.517907  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:18.569373  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:18.569412  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:18.597414  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:18.597442  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:18.615623  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:18.615651  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:18.687384  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:18.679364    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.680188    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.681715    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.682200    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.683729    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:18.679364    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.680188    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.681715    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.682200    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.683729    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:18.687406  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:18.687420  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:18.724107  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:18.724135  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:18.757798  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:18.757832  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:18.823518  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:18.823556  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:18.868332  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:18.868358  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:18.948355  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:18.948391  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:18.980022  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:18.980052  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:21.580647  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:21.591760  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:21.591828  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:21.619734  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:21.619755  306747 cri.go:89] found id: ""
	I1017 19:28:21.619763  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:21.619822  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.623634  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:21.623706  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:21.650174  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:21.650202  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:21.650207  306747 cri.go:89] found id: ""
	I1017 19:28:21.650215  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:21.650275  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.654337  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.658320  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:21.658390  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:21.685562  306747 cri.go:89] found id: ""
	I1017 19:28:21.685587  306747 logs.go:282] 0 containers: []
	W1017 19:28:21.685596  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:21.685602  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:21.685696  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:21.711151  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:21.711175  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:21.711180  306747 cri.go:89] found id: ""
	I1017 19:28:21.711188  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:21.711241  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.714981  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.718517  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:21.718587  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:21.745770  306747 cri.go:89] found id: ""
	I1017 19:28:21.745796  306747 logs.go:282] 0 containers: []
	W1017 19:28:21.745805  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:21.745812  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:21.745872  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:21.773020  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:21.773042  306747 cri.go:89] found id: ""
	I1017 19:28:21.773052  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:21.773107  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.776980  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:21.777073  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:21.805110  306747 cri.go:89] found id: ""
	I1017 19:28:21.805137  306747 logs.go:282] 0 containers: []
	W1017 19:28:21.805146  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:21.805156  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:21.805187  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:21.915295  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:21.915339  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:21.934521  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:21.934553  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:21.971829  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:21.971867  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:22.032460  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:22.032500  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:22.069813  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:22.069901  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:22.150515  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:22.150553  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:22.186817  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:22.186843  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:22.250982  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:22.242783    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.243418    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.244975    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.245572    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.247184    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:22.242783    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.243418    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.244975    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.245572    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.247184    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:22.251005  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:22.251019  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:22.318367  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:22.318403  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:22.359962  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:22.359991  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:24.888496  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:24.899632  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:24.899701  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:24.927106  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:24.927126  306747 cri.go:89] found id: ""
	I1017 19:28:24.927135  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:24.927191  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:24.930789  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:24.930901  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:24.957962  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:24.957986  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:24.957992  306747 cri.go:89] found id: ""
	I1017 19:28:24.958000  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:24.958052  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:24.961689  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:24.965312  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:24.965388  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:24.999567  306747 cri.go:89] found id: ""
	I1017 19:28:24.999646  306747 logs.go:282] 0 containers: []
	W1017 19:28:24.999670  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:24.999692  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:24.999784  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:25.030377  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:25.030447  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:25.030466  306747 cri.go:89] found id: ""
	I1017 19:28:25.030493  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:25.030587  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:25.034492  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:25.038213  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:25.038307  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:25.064926  306747 cri.go:89] found id: ""
	I1017 19:28:25.065005  306747 logs.go:282] 0 containers: []
	W1017 19:28:25.065022  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:25.065029  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:25.065092  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:25.104761  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:25.104835  306747 cri.go:89] found id: ""
	I1017 19:28:25.104851  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:25.104908  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:25.109062  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:25.109153  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:25.137891  306747 cri.go:89] found id: ""
	I1017 19:28:25.137923  306747 logs.go:282] 0 containers: []
	W1017 19:28:25.137931  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:25.137940  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:25.137953  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:25.170975  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:25.171007  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:25.204002  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:25.204031  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:25.297840  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:25.297914  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:25.315642  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:25.315682  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:25.369974  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:25.370011  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:25.452713  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:25.452749  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:25.483409  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:25.483439  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:25.558385  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:25.550412    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.551034    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.552731    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.553294    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.554883    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:25.550412    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.551034    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.552731    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.553294    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.554883    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:25.558408  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:25.558421  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:25.585961  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:25.585989  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:25.617689  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:25.617720  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:28.181797  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:28.193078  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:28.193193  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:28.220858  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:28.220880  306747 cri.go:89] found id: ""
	I1017 19:28:28.220889  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:28.220949  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.224889  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:28.224962  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:28.256761  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:28.256782  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:28.256787  306747 cri.go:89] found id: ""
	I1017 19:28:28.256795  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:28.256849  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.261049  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.264952  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:28.265076  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:28.291441  306747 cri.go:89] found id: ""
	I1017 19:28:28.291509  306747 logs.go:282] 0 containers: []
	W1017 19:28:28.291533  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:28.291556  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:28.291641  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:28.318704  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:28.318768  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:28.318790  306747 cri.go:89] found id: ""
	I1017 19:28:28.318815  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:28.318904  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.323349  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.327034  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:28.327096  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:28.357958  306747 cri.go:89] found id: ""
	I1017 19:28:28.357983  306747 logs.go:282] 0 containers: []
	W1017 19:28:28.357992  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:28.358001  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:28.358059  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:28.384163  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:28.384187  306747 cri.go:89] found id: ""
	I1017 19:28:28.384196  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:28.384262  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.387976  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:28.388088  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:28.414600  306747 cri.go:89] found id: ""
	I1017 19:28:28.414625  306747 logs.go:282] 0 containers: []
	W1017 19:28:28.414635  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:28.414644  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:28.414655  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:28.478712  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:28.469484    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.470334    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.472333    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.473060    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.474868    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:28.469484    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.470334    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.472333    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.473060    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.474868    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:28.478736  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:28.478749  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:28.504392  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:28.504432  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:28.566111  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:28.566147  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:28.597513  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:28.597544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:28.676314  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:28.676352  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:28.779140  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:28.779181  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:28.830823  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:28.830858  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:28.873192  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:28.873224  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:28.907594  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:28.907621  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:28.939159  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:28.939188  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:31.457173  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:31.468390  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:31.468462  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:31.500159  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:31.500183  306747 cri.go:89] found id: ""
	I1017 19:28:31.500191  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:31.500245  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.503981  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:31.504051  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:31.529707  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:31.529735  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:31.529740  306747 cri.go:89] found id: ""
	I1017 19:28:31.529748  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:31.529810  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.533478  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.536973  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:31.537042  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:31.562894  306747 cri.go:89] found id: ""
	I1017 19:28:31.562920  306747 logs.go:282] 0 containers: []
	W1017 19:28:31.562929  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:31.562936  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:31.562996  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:31.591920  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:31.591943  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:31.591949  306747 cri.go:89] found id: ""
	I1017 19:28:31.591956  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:31.592011  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.595596  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.598999  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:31.599093  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:31.631142  306747 cri.go:89] found id: ""
	I1017 19:28:31.631164  306747 logs.go:282] 0 containers: []
	W1017 19:28:31.631173  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:31.631179  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:31.631264  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:31.657995  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:31.658017  306747 cri.go:89] found id: ""
	I1017 19:28:31.658026  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:31.658077  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.661797  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:31.661866  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:31.687995  306747 cri.go:89] found id: ""
	I1017 19:28:31.688019  306747 logs.go:282] 0 containers: []
	W1017 19:28:31.688028  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:31.688037  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:31.688049  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:31.714258  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:31.714288  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:31.743480  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:31.743510  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:31.839126  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:31.839165  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:31.865944  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:31.865971  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:31.923800  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:31.923834  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:32.015198  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:32.015258  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:32.108618  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:32.108656  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:32.127026  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:32.127056  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:32.197465  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:32.189288    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.190038    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.191643    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.191956    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.193464    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:32.189288    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.190038    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.191643    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.191956    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.193464    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:32.197487  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:32.197501  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:32.230297  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:32.230333  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:34.763313  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:34.773938  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:34.774008  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:34.801473  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:34.801491  306747 cri.go:89] found id: ""
	I1017 19:28:34.801498  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:34.801568  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.805380  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:34.805451  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:34.831939  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:34.831964  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:34.831968  306747 cri.go:89] found id: ""
	I1017 19:28:34.831976  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:34.832034  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.836223  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.839881  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:34.839985  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:34.867700  306747 cri.go:89] found id: ""
	I1017 19:28:34.867725  306747 logs.go:282] 0 containers: []
	W1017 19:28:34.867735  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:34.867741  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:34.867826  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:34.898720  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:34.898743  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:34.898748  306747 cri.go:89] found id: ""
	I1017 19:28:34.898756  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:34.898827  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.902459  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.905896  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:34.905974  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:34.933166  306747 cri.go:89] found id: ""
	I1017 19:28:34.933242  306747 logs.go:282] 0 containers: []
	W1017 19:28:34.933258  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:34.933266  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:34.933326  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:34.961978  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:34.962067  306747 cri.go:89] found id: ""
	I1017 19:28:34.962091  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:34.962173  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.966069  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:34.966147  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:34.993526  306747 cri.go:89] found id: ""
	I1017 19:28:34.993565  306747 logs.go:282] 0 containers: []
	W1017 19:28:34.993574  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:34.993583  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:34.993594  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:35.023086  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:35.023173  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:35.057614  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:35.057652  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:35.126909  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:35.126944  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:35.207646  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:35.207681  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:35.240791  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:35.240824  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:35.259253  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:35.259285  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:35.327544  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:35.319793    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.320443    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.321977    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.322405    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.323890    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:35.319793    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.320443    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.321977    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.322405    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.323890    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:35.327566  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:35.327579  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:35.377112  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:35.377150  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:35.405892  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:35.405920  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:35.431201  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:35.431230  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:38.030766  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:38.042946  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:38.043015  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:38.074181  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:38.074215  306747 cri.go:89] found id: ""
	I1017 19:28:38.074224  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:38.074287  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.079011  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:38.079083  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:38.108493  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:38.108592  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:38.108612  306747 cri.go:89] found id: ""
	I1017 19:28:38.108636  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:38.108721  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.112489  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.115918  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:38.116030  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:38.146192  306747 cri.go:89] found id: ""
	I1017 19:28:38.146215  306747 logs.go:282] 0 containers: []
	W1017 19:28:38.146225  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:38.146233  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:38.146315  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:38.178299  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:38.178363  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:38.178375  306747 cri.go:89] found id: ""
	I1017 19:28:38.178382  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:38.178438  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.182144  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.185723  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:38.185785  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:38.210486  306747 cri.go:89] found id: ""
	I1017 19:28:38.210509  306747 logs.go:282] 0 containers: []
	W1017 19:28:38.210518  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:38.210524  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:38.210578  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:38.240550  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:38.240573  306747 cri.go:89] found id: ""
	I1017 19:28:38.240581  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:38.240633  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.246616  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:38.246710  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:38.272684  306747 cri.go:89] found id: ""
	I1017 19:28:38.272710  306747 logs.go:282] 0 containers: []
	W1017 19:28:38.272719  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:38.272728  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:38.272759  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:38.291309  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:38.291338  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:38.362093  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:38.354481    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.355177    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.356720    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.357017    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.358292    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:38.354481    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.355177    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.356720    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.357017    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.358292    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:38.362115  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:38.362136  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:38.388487  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:38.388541  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:38.460507  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:38.460545  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:38.493438  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:38.493472  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:38.519348  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:38.519378  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:38.547771  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:38.547800  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:38.646739  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:38.646779  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:38.711727  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:38.711765  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:38.794605  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:38.794645  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
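The cycle above repeats every few seconds while minikube waits for the apiserver to return: it discovers control-plane containers with crictl, then tails their logs together with kubelet, CRI-O, dmesg, and a "describe nodes" call that keeps failing. A minimal sketch of the same sequence, runnable by hand inside the node (e.g. after `minikube ssh -p <profile>`; the container ID placeholder is hypothetical and differs per run), using only the commands shown in the log:

	# list control-plane containers, then tail the log of one of them
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo /usr/local/bin/crictl logs --tail 400 <container-id>
	# node-level logs gathered in the same pass
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400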
	I1017 19:28:41.329100  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:41.340102  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:41.340191  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:41.378237  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:41.378304  306747 cri.go:89] found id: ""
	I1017 19:28:41.378327  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:41.378411  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.382295  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:41.382433  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:41.413432  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:41.413454  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:41.413459  306747 cri.go:89] found id: ""
	I1017 19:28:41.413483  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:41.413541  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.417349  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.420940  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:41.421030  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:41.447730  306747 cri.go:89] found id: ""
	I1017 19:28:41.447754  306747 logs.go:282] 0 containers: []
	W1017 19:28:41.447763  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:41.447769  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:41.447917  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:41.473491  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:41.473514  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:41.473520  306747 cri.go:89] found id: ""
	I1017 19:28:41.473527  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:41.473602  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.477615  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.481139  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:41.481211  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:41.507258  306747 cri.go:89] found id: ""
	I1017 19:28:41.507283  306747 logs.go:282] 0 containers: []
	W1017 19:28:41.507292  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:41.507300  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:41.507356  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:41.537051  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:41.537073  306747 cri.go:89] found id: ""
	I1017 19:28:41.537082  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:41.537134  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.540852  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:41.540920  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:41.567361  306747 cri.go:89] found id: ""
	I1017 19:28:41.567389  306747 logs.go:282] 0 containers: []
	W1017 19:28:41.567398  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:41.567407  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:41.567419  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:41.599142  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:41.599172  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:41.635743  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:41.635773  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:41.654302  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:41.654331  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:41.717143  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:41.717179  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:41.792345  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:41.792380  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:41.871479  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:41.871517  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:41.975433  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:41.975512  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:42.054059  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:42.044191    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.045351    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.046050    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.047965    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.048651    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:42.044191    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.045351    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.046050    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.047965    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.048651    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:42.054083  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:42.054106  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:42.089914  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:42.089944  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:42.149148  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:42.149200  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:44.709425  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:44.719908  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:44.719977  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:44.763510  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:44.763534  306747 cri.go:89] found id: ""
	I1017 19:28:44.763541  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:44.763594  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.767241  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:44.767313  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:44.795651  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:44.795675  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:44.795681  306747 cri.go:89] found id: ""
	I1017 19:28:44.795689  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:44.795742  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.800272  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.804452  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:44.804565  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:44.839339  306747 cri.go:89] found id: ""
	I1017 19:28:44.839371  306747 logs.go:282] 0 containers: []
	W1017 19:28:44.839379  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:44.839386  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:44.839452  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:44.875066  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:44.875099  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:44.875105  306747 cri.go:89] found id: ""
	I1017 19:28:44.875139  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:44.875214  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.880309  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.883914  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:44.884020  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:44.917517  306747 cri.go:89] found id: ""
	I1017 19:28:44.917586  306747 logs.go:282] 0 containers: []
	W1017 19:28:44.917614  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:44.917638  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:44.917727  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:44.946317  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:44.946393  306747 cri.go:89] found id: ""
	I1017 19:28:44.946416  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:44.946496  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.950194  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:44.950311  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:44.976935  306747 cri.go:89] found id: ""
	I1017 19:28:44.977000  306747 logs.go:282] 0 containers: []
	W1017 19:28:44.977027  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:44.977054  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:44.977071  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:45.083362  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:45.083465  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:45.185240  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:45.174155    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.175051    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.176949    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.178114    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.178917    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:45.174155    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.175051    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.176949    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.178114    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.178917    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:45.185281  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:45.185298  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:45.229219  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:45.229247  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:45.303101  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:45.303141  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:45.395057  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:45.395208  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:45.422882  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:45.422938  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:45.465002  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:45.465035  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:45.501568  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:45.501600  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:45.530952  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:45.530983  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:45.610519  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:45.610560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:48.146542  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:48.158014  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:48.158095  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:48.185610  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:48.185676  306747 cri.go:89] found id: ""
	I1017 19:28:48.185699  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:48.185773  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.189874  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:48.189975  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:48.216931  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:48.216997  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:48.217020  306747 cri.go:89] found id: ""
	I1017 19:28:48.217044  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:48.217112  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.220961  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.224622  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:48.224715  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:48.254633  306747 cri.go:89] found id: ""
	I1017 19:28:48.254660  306747 logs.go:282] 0 containers: []
	W1017 19:28:48.254669  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:48.254676  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:48.254759  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:48.280918  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:48.280996  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:48.281017  306747 cri.go:89] found id: ""
	I1017 19:28:48.281033  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:48.281101  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.285444  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.289246  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:48.289369  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:48.317150  306747 cri.go:89] found id: ""
	I1017 19:28:48.317216  306747 logs.go:282] 0 containers: []
	W1017 19:28:48.317244  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:48.317275  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:48.317350  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:48.347609  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:48.347643  306747 cri.go:89] found id: ""
	I1017 19:28:48.347652  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:48.347704  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.351509  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:48.351584  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:48.376680  306747 cri.go:89] found id: ""
	I1017 19:28:48.376708  306747 logs.go:282] 0 containers: []
	W1017 19:28:48.376716  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:48.376726  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:48.376738  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:48.452752  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:48.452788  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:48.484352  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:48.484382  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:48.510315  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:48.510344  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:48.571544  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:48.571578  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:48.609922  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:48.609951  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:48.642129  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:48.642158  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:48.737103  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:48.737139  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:48.755251  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:48.755324  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:48.826596  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:48.817740    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.818885    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.819683    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.820717    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.821339    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:48.817740    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.818885    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.819683    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.820717    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.821339    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:48.826621  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:48.826676  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:48.917412  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:48.917447  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:51.447884  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:51.458905  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:51.458975  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:51.486341  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:51.486364  306747 cri.go:89] found id: ""
	I1017 19:28:51.486373  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:51.486435  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.490132  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:51.490214  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:51.515926  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:51.515950  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:51.515956  306747 cri.go:89] found id: ""
	I1017 19:28:51.515964  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:51.516033  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.520421  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.524078  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:51.524150  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:51.558659  306747 cri.go:89] found id: ""
	I1017 19:28:51.558683  306747 logs.go:282] 0 containers: []
	W1017 19:28:51.558693  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:51.558700  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:51.558754  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:51.584326  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:51.584349  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:51.584355  306747 cri.go:89] found id: ""
	I1017 19:28:51.584362  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:51.584417  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.588059  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.591616  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:51.591692  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:51.621537  306747 cri.go:89] found id: ""
	I1017 19:28:51.621562  306747 logs.go:282] 0 containers: []
	W1017 19:28:51.621571  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:51.621577  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:51.621634  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:51.648966  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:51.648994  306747 cri.go:89] found id: ""
	I1017 19:28:51.649002  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:51.649064  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.652867  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:51.652934  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:51.685921  306747 cri.go:89] found id: ""
	I1017 19:28:51.685944  306747 logs.go:282] 0 containers: []
	W1017 19:28:51.685953  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:51.685962  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:51.685973  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:51.759988  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:51.760023  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:51.846069  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:51.835717    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.836264    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.837776    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.840665    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.841647    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:51.835717    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.836264    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.837776    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.840665    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.841647    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:51.846090  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:51.846105  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:51.875253  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:51.875281  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:51.929449  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:51.929478  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:52.036309  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:52.036348  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:52.054743  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:52.054772  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:52.088833  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:52.088860  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:52.157298  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:52.157332  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:52.199361  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:52.199392  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:52.268239  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:52.268286  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:54.799369  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:54.809961  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:54.810031  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:54.836137  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:54.836157  306747 cri.go:89] found id: ""
	I1017 19:28:54.836167  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:54.836220  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.839841  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:54.839912  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:54.873358  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:54.873379  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:54.873383  306747 cri.go:89] found id: ""
	I1017 19:28:54.873391  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:54.873445  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.877284  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.881090  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:54.881164  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:54.908431  306747 cri.go:89] found id: ""
	I1017 19:28:54.908456  306747 logs.go:282] 0 containers: []
	W1017 19:28:54.908465  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:54.908471  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:54.908607  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:54.935825  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:54.935845  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:54.935850  306747 cri.go:89] found id: ""
	I1017 19:28:54.935857  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:54.935913  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.939621  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.943502  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:54.943577  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:54.973718  306747 cri.go:89] found id: ""
	I1017 19:28:54.973742  306747 logs.go:282] 0 containers: []
	W1017 19:28:54.973751  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:54.973757  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:54.973818  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:55.004781  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:55.004802  306747 cri.go:89] found id: ""
	I1017 19:28:55.004818  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:55.004885  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:55.015050  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:55.015136  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:55.043899  306747 cri.go:89] found id: ""
	I1017 19:28:55.043966  306747 logs.go:282] 0 containers: []
	W1017 19:28:55.043988  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:55.044013  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:55.044056  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:55.097224  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:55.097263  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:55.126143  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:55.126175  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:55.170272  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:55.170302  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:55.190816  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:55.190846  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:55.229778  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:55.229815  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:55.296882  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:55.296954  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:55.322920  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:55.322960  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:55.398513  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:55.398549  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:55.499678  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:55.499714  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:55.563984  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:55.555178    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.556013    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.557806    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.558580    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.560270    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:55.555178    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.556013    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.557806    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.558580    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.560270    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:55.564010  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:55.564024  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:58.090313  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:58.101520  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:58.101590  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:58.135133  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:58.135155  306747 cri.go:89] found id: ""
	I1017 19:28:58.135165  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:58.135217  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.139309  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:58.139381  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:58.166722  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:58.166743  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:58.166749  306747 cri.go:89] found id: ""
	I1017 19:28:58.166757  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:58.166829  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.170644  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.174541  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:58.174614  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:58.200707  306747 cri.go:89] found id: ""
	I1017 19:28:58.200733  306747 logs.go:282] 0 containers: []
	W1017 19:28:58.200741  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:58.200748  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:58.200802  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:58.227069  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:58.227090  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:58.227095  306747 cri.go:89] found id: ""
	I1017 19:28:58.227102  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:58.227153  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.230793  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.234187  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:58.234268  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:58.260228  306747 cri.go:89] found id: ""
	I1017 19:28:58.260255  306747 logs.go:282] 0 containers: []
	W1017 19:28:58.260264  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:58.260271  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:58.260330  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:58.287560  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:58.287582  306747 cri.go:89] found id: ""
	I1017 19:28:58.287590  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:58.287642  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.291431  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:58.291498  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:58.319091  306747 cri.go:89] found id: ""
	I1017 19:28:58.319116  306747 logs.go:282] 0 containers: []
	W1017 19:28:58.319125  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:58.319133  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:58.319144  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:58.357128  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:58.357156  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:58.457940  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:58.457987  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:58.477285  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:58.477363  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:58.553846  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:58.545334    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.546110    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.547791    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.548153    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.549602    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:58.545334    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.546110    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.547791    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.548153    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.549602    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:58.553942  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:58.553987  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:58.588733  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:58.588806  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:58.615167  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:58.615234  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:58.668448  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:58.668480  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:58.701507  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:58.701539  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:58.772475  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:58.772512  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:58.800891  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:58.800921  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:01.380664  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:01.397862  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:01.397929  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:01.438317  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:01.438341  306747 cri.go:89] found id: ""
	I1017 19:29:01.438349  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:01.438408  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.448585  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:01.448665  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:01.480947  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:01.480971  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:01.480978  306747 cri.go:89] found id: ""
	I1017 19:29:01.480985  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:01.481040  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.488101  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.493426  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:01.493541  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:01.529725  306747 cri.go:89] found id: ""
	I1017 19:29:01.529759  306747 logs.go:282] 0 containers: []
	W1017 19:29:01.529767  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:01.529803  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:01.529888  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:01.570078  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:01.570130  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:01.570162  306747 cri.go:89] found id: ""
	I1017 19:29:01.570347  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:01.570572  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.580262  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.584761  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:01.584865  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:01.619278  306747 cri.go:89] found id: ""
	I1017 19:29:01.619316  306747 logs.go:282] 0 containers: []
	W1017 19:29:01.619326  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:01.619460  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:01.619709  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:01.668374  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:01.668398  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:01.668404  306747 cri.go:89] found id: ""
	I1017 19:29:01.668411  306747 logs.go:282] 2 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:29:01.668500  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.672629  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.676472  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:01.676559  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:01.718877  306747 cri.go:89] found id: ""
	I1017 19:29:01.718901  306747 logs.go:282] 0 containers: []
	W1017 19:29:01.718911  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:01.718979  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:01.719003  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:01.786370  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:01.786448  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:01.835925  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:01.836009  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:01.936969  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:01.937000  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:01.985828  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:01.985857  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:02.036057  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:02.036090  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:02.088571  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:02.088600  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:02.183054  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:02.174539    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.175524    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.177270    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.177576    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.179060    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:02.174539    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.175524    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.177270    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.177576    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.179060    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:02.183078  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:02.183094  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:02.214988  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:29:02.215019  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:02.246207  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:02.246238  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:02.338642  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:02.338682  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:02.473356  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:02.473435  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:04.994292  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:05.005817  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:05.005900  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:05.038175  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:05.038208  306747 cri.go:89] found id: ""
	I1017 19:29:05.038217  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:05.038276  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.042122  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:05.042193  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:05.072245  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:05.072271  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:05.072277  306747 cri.go:89] found id: ""
	I1017 19:29:05.072290  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:05.072369  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.085415  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.089790  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:05.089901  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:05.126026  306747 cri.go:89] found id: ""
	I1017 19:29:05.126051  306747 logs.go:282] 0 containers: []
	W1017 19:29:05.126059  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:05.126065  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:05.126129  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:05.157653  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:05.157689  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:05.157694  306747 cri.go:89] found id: ""
	I1017 19:29:05.157708  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:05.157780  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.162134  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.166047  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:05.166134  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:05.201222  306747 cri.go:89] found id: ""
	I1017 19:29:05.201247  306747 logs.go:282] 0 containers: []
	W1017 19:29:05.201266  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:05.201291  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:05.201364  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:05.228323  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:05.228343  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:05.228348  306747 cri.go:89] found id: ""
	I1017 19:29:05.228355  306747 logs.go:282] 2 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:29:05.228413  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.232758  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.236321  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:05.236407  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:05.264094  306747 cri.go:89] found id: ""
	I1017 19:29:05.264119  306747 logs.go:282] 0 containers: []
	W1017 19:29:05.264128  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:05.264137  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:05.264150  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:05.289719  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:05.289749  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:05.341596  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:05.341632  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:05.385650  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:05.385681  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:05.455993  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:05.456032  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:05.482902  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:05.482967  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:05.561357  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:05.561393  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:05.662914  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:05.662948  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:05.681986  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:05.682019  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:05.709932  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:29:05.709959  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:05.745521  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:05.745548  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:05.780007  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:05.780039  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:05.861169  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:05.844357    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.845194    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.846708    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.847144    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.849138    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:05.844357    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.845194    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.846708    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.847144    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.849138    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:08.361828  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:08.372509  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:08.372609  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:08.398614  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:08.398638  306747 cri.go:89] found id: ""
	I1017 19:29:08.398646  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:08.398707  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.402221  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:08.402294  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:08.426256  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:08.426278  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:08.426284  306747 cri.go:89] found id: ""
	I1017 19:29:08.426291  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:08.426341  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.429916  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.433518  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:08.433587  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:08.460461  306747 cri.go:89] found id: ""
	I1017 19:29:08.460487  306747 logs.go:282] 0 containers: []
	W1017 19:29:08.460495  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:08.460502  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:08.460591  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:08.488509  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:08.488562  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:08.488568  306747 cri.go:89] found id: ""
	I1017 19:29:08.488576  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:08.488628  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.492158  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.495581  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:08.495647  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:08.524899  306747 cri.go:89] found id: ""
	I1017 19:29:08.524920  306747 logs.go:282] 0 containers: []
	W1017 19:29:08.524928  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:08.524934  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:08.524997  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:08.552958  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:08.552979  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:08.552984  306747 cri.go:89] found id: ""
	I1017 19:29:08.552991  306747 logs.go:282] 2 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:29:08.553045  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.557091  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.560618  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:08.560683  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:08.587418  306747 cri.go:89] found id: ""
	I1017 19:29:08.587495  306747 logs.go:282] 0 containers: []
	W1017 19:29:08.587517  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:08.587557  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:29:08.587586  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:08.617740  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:08.617768  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:08.691709  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:08.691747  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:08.710175  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:08.710209  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:08.777270  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:08.777305  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:08.810729  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:08.810754  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:08.861497  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:08.861524  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:08.964232  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:08.964270  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:09.042894  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:09.034262    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.034773    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.036444    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.037159    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.038877    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:09.034262    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.034773    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.036444    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.037159    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.038877    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:09.042916  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:09.042941  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:09.067822  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:09.067849  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:09.107723  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:09.107755  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:09.186115  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:09.186151  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:11.716134  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:11.726531  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:11.726597  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:11.752711  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:11.752733  306747 cri.go:89] found id: ""
	I1017 19:29:11.752741  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:11.752795  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.756278  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:11.756366  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:11.786396  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:11.786424  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:11.786430  306747 cri.go:89] found id: ""
	I1017 19:29:11.786439  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:11.786523  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.790327  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.794284  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:11.794350  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:11.826413  306747 cri.go:89] found id: ""
	I1017 19:29:11.826437  306747 logs.go:282] 0 containers: []
	W1017 19:29:11.826446  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:11.826452  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:11.826507  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:11.861782  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:11.861855  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:11.861875  306747 cri.go:89] found id: ""
	I1017 19:29:11.861900  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:11.861986  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.866376  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.870040  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:11.870106  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:11.902703  306747 cri.go:89] found id: ""
	I1017 19:29:11.902725  306747 logs.go:282] 0 containers: []
	W1017 19:29:11.902739  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:11.902745  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:11.902803  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:11.932072  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:11.932141  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:11.932161  306747 cri.go:89] found id: ""
	I1017 19:29:11.932186  306747 logs.go:282] 2 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:29:11.932273  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.935981  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.939489  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:11.939560  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:11.975511  306747 cri.go:89] found id: ""
	I1017 19:29:11.975535  306747 logs.go:282] 0 containers: []
	W1017 19:29:11.975544  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:11.975553  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:11.975565  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:12.003072  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:29:12.003107  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:12.038364  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:12.038400  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:12.116412  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:12.116450  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:12.147738  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:12.147766  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:12.245018  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:12.245053  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:12.262566  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:12.262641  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:12.312750  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:12.312785  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:12.349963  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:12.349991  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:12.419426  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:12.411356    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.411861    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.413495    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.414181    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.415507    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:12.411356    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.411861    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.413495    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.414181    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.415507    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:12.419456  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:12.419472  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:12.444065  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:12.444093  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:12.511165  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:12.511200  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:15.042908  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:15.054321  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:15.054394  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:15.089860  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:15.089886  306747 cri.go:89] found id: ""
	I1017 19:29:15.089895  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:15.089951  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.093678  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:15.093788  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:15.121746  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:15.121771  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:15.121776  306747 cri.go:89] found id: ""
	I1017 19:29:15.121784  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:15.121839  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.125790  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.129470  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:15.129544  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:15.156564  306747 cri.go:89] found id: ""
	I1017 19:29:15.156591  306747 logs.go:282] 0 containers: []
	W1017 19:29:15.156600  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:15.156606  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:15.156665  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:15.189983  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:15.190010  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:15.190015  306747 cri.go:89] found id: ""
	I1017 19:29:15.190023  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:15.190113  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.194081  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.197983  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:15.198087  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:15.224673  306747 cri.go:89] found id: ""
	I1017 19:29:15.224701  306747 logs.go:282] 0 containers: []
	W1017 19:29:15.224710  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:15.224716  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:15.224776  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:15.250249  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:15.250272  306747 cri.go:89] found id: ""
	I1017 19:29:15.250280  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:15.250336  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.254014  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:15.254080  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:15.281235  306747 cri.go:89] found id: ""
	I1017 19:29:15.281313  306747 logs.go:282] 0 containers: []
	W1017 19:29:15.281337  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:15.281363  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:15.281395  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:15.385553  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:15.385599  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:15.411962  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:15.411991  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:15.455045  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:15.455073  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:15.527131  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:15.527170  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:15.554497  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:15.554527  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:15.587137  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:15.587164  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:15.604763  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:15.604794  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:15.679834  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:15.670121    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.670686    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.672157    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.672558    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.674247    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:15.670121    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.670686    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.672157    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.672558    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.674247    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:15.679857  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:15.679870  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:15.734902  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:15.734947  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:15.764734  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:15.764760  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:18.342635  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:18.353361  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:18.353435  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:18.380287  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:18.380311  306747 cri.go:89] found id: ""
	I1017 19:29:18.380319  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:18.380371  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.384298  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:18.384372  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:18.410566  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:18.410585  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:18.410590  306747 cri.go:89] found id: ""
	I1017 19:29:18.410597  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:18.410651  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.414392  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.417897  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:18.417969  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:18.447960  306747 cri.go:89] found id: ""
	I1017 19:29:18.447984  306747 logs.go:282] 0 containers: []
	W1017 19:29:18.447992  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:18.447999  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:18.448054  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:18.474020  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:18.474043  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:18.474049  306747 cri.go:89] found id: ""
	I1017 19:29:18.474059  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:18.474117  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.477723  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.481031  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:18.481111  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:18.508003  306747 cri.go:89] found id: ""
	I1017 19:29:18.508026  306747 logs.go:282] 0 containers: []
	W1017 19:29:18.508034  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:18.508040  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:18.508123  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:18.535988  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:18.536017  306747 cri.go:89] found id: ""
	I1017 19:29:18.536026  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:18.536114  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.539822  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:18.539919  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:18.565247  306747 cri.go:89] found id: ""
	I1017 19:29:18.565271  306747 logs.go:282] 0 containers: []
	W1017 19:29:18.565279  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:18.565287  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:18.565340  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:18.590409  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:18.590435  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:18.664546  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:18.664583  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:18.720073  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:18.720102  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:18.818026  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:18.818065  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:18.838304  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:18.838335  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:18.923376  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:18.914478    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.915271    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.916962    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.917666    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.919294    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:18.914478    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.915271    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.916962    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.917666    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.919294    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:18.923400  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:18.923413  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:18.958683  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:18.958723  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:18.993098  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:18.993125  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:19.020011  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:19.020054  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:19.072525  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:19.072558  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:21.648626  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:21.658854  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:21.658923  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:21.686357  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:21.686380  306747 cri.go:89] found id: ""
	I1017 19:29:21.686388  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:21.686440  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.690383  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:21.690455  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:21.716829  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:21.716849  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:21.716854  306747 cri.go:89] found id: ""
	I1017 19:29:21.716861  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:21.716918  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.720495  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.723948  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:21.724016  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:21.751438  306747 cri.go:89] found id: ""
	I1017 19:29:21.751462  306747 logs.go:282] 0 containers: []
	W1017 19:29:21.751471  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:21.751478  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:21.751540  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:21.777499  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:21.777526  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:21.777531  306747 cri.go:89] found id: ""
	I1017 19:29:21.777539  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:21.777597  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.781539  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.785454  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:21.785568  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:21.816183  306747 cri.go:89] found id: ""
	I1017 19:29:21.816248  306747 logs.go:282] 0 containers: []
	W1017 19:29:21.816270  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:21.816292  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:21.816377  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:21.854603  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:21.854670  306747 cri.go:89] found id: ""
	I1017 19:29:21.854695  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:21.854779  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.860948  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:21.861028  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:21.899847  306747 cri.go:89] found id: ""
	I1017 19:29:21.899871  306747 logs.go:282] 0 containers: []
	W1017 19:29:21.899879  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:21.899887  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:21.899899  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:21.958460  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:21.958497  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:22.040921  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:22.040958  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:22.070331  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:22.070410  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:22.149286  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:22.149326  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:22.180733  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:22.180761  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:22.199492  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:22.199531  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:22.272753  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:22.265010    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.265612    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.267150    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.267571    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.269051    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:22.265010    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.265612    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.267150    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.267571    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.269051    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:22.272779  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:22.272792  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:22.299733  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:22.299761  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:22.342105  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:22.342137  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:22.369741  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:22.369780  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:24.966101  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:24.976635  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:24.976715  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:25.022230  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:25.022256  306747 cri.go:89] found id: ""
	I1017 19:29:25.022267  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:25.022330  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.026476  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:25.026548  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:25.056264  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:25.056282  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:25.056287  306747 cri.go:89] found id: ""
	I1017 19:29:25.056295  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:25.056345  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.061372  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.064965  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:25.065034  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:25.104703  306747 cri.go:89] found id: ""
	I1017 19:29:25.104725  306747 logs.go:282] 0 containers: []
	W1017 19:29:25.104734  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:25.104739  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:25.104799  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:25.137104  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:25.137128  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:25.137134  306747 cri.go:89] found id: ""
	I1017 19:29:25.137142  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:25.137197  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.141057  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.144695  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:25.144771  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:25.171838  306747 cri.go:89] found id: ""
	I1017 19:29:25.171861  306747 logs.go:282] 0 containers: []
	W1017 19:29:25.171870  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:25.171876  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:25.171935  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:25.204227  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:25.204251  306747 cri.go:89] found id: ""
	I1017 19:29:25.204259  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:25.204312  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.208502  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:25.208632  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:25.234929  306747 cri.go:89] found id: ""
	I1017 19:29:25.235003  306747 logs.go:282] 0 containers: []
	W1017 19:29:25.235020  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:25.235030  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:25.235043  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:25.272163  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:25.272192  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:25.370863  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:25.370900  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:25.411966  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:25.412009  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:25.479240  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:25.479276  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:25.506577  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:25.506606  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:25.580671  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:25.580706  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:25.614033  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:25.614061  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:25.631893  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:25.631922  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:25.703391  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:25.694870    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.695646    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.697219    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.697740    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.699431    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:25.694870    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.695646    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.697219    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.697740    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.699431    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:25.703420  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:25.703449  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:25.729186  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:25.729213  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:28.281561  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:28.292670  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:28.292764  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:28.321689  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:28.321709  306747 cri.go:89] found id: ""
	I1017 19:29:28.321718  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:28.321791  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.325401  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:28.325491  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:28.353611  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:28.353636  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:28.353642  306747 cri.go:89] found id: ""
	I1017 19:29:28.353649  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:28.353708  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.357789  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.361132  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:28.361209  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:28.388364  306747 cri.go:89] found id: ""
	I1017 19:29:28.388392  306747 logs.go:282] 0 containers: []
	W1017 19:29:28.388401  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:28.388408  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:28.388471  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:28.414080  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:28.414105  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:28.414111  306747 cri.go:89] found id: ""
	I1017 19:29:28.414119  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:28.414176  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.417894  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.421494  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:28.421617  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:28.448583  306747 cri.go:89] found id: ""
	I1017 19:29:28.448611  306747 logs.go:282] 0 containers: []
	W1017 19:29:28.448620  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:28.448626  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:28.448683  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:28.481175  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:28.481198  306747 cri.go:89] found id: ""
	I1017 19:29:28.481208  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:28.481262  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.485099  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:28.485212  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:28.511543  306747 cri.go:89] found id: ""
	I1017 19:29:28.511569  306747 logs.go:282] 0 containers: []
	W1017 19:29:28.511577  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:28.511586  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:28.511617  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:28.606473  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:28.606511  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:28.626545  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:28.626577  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:28.697168  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:28.689422    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.690138    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.691704    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.692016    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.693514    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:28.689422    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.690138    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.691704    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.692016    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.693514    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:28.697191  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:28.697204  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:28.750046  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:28.750080  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:28.818139  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:28.818172  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:28.847832  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:28.847916  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:28.928453  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:28.928489  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:28.959160  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:28.959188  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:28.986346  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:28.986374  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:29.037329  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:29.037364  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:31.569631  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:31.580386  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:31.580488  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:31.606748  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:31.606776  306747 cri.go:89] found id: ""
	I1017 19:29:31.606786  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:31.606861  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.610709  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:31.610808  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:31.637721  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:31.637742  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:31.637747  306747 cri.go:89] found id: ""
	I1017 19:29:31.637754  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:31.637831  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.641550  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.644918  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:31.644994  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:31.671222  306747 cri.go:89] found id: ""
	I1017 19:29:31.671248  306747 logs.go:282] 0 containers: []
	W1017 19:29:31.671257  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:31.671263  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:31.671320  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:31.698318  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:31.698341  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:31.698347  306747 cri.go:89] found id: ""
	I1017 19:29:31.698354  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:31.698409  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.702033  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.705305  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:31.705406  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:31.733910  306747 cri.go:89] found id: ""
	I1017 19:29:31.733940  306747 logs.go:282] 0 containers: []
	W1017 19:29:31.733949  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:31.733956  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:31.734012  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:31.759712  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:31.759743  306747 cri.go:89] found id: ""
	I1017 19:29:31.759752  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:31.759802  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.763496  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:31.763571  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:31.789631  306747 cri.go:89] found id: ""
	I1017 19:29:31.789656  306747 logs.go:282] 0 containers: []
	W1017 19:29:31.789665  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:31.789684  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:31.789701  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:31.907913  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:31.907961  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:31.927231  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:31.927316  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:32.018355  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:32.018394  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:32.062156  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:32.062194  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:32.153927  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:32.153962  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:32.187982  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:32.188010  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:32.258773  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:32.251239    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.251763    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.253326    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.253710    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.255187    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:32.251239    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.251763    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.253326    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.253710    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.255187    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:32.258796  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:32.258835  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:32.290660  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:32.290689  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:32.368997  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:32.369029  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:32.400957  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:32.400988  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:34.933742  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:34.945067  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:34.945160  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:34.975919  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:34.975944  306747 cri.go:89] found id: ""
	I1017 19:29:34.975952  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:34.976011  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:34.979876  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:34.979963  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:35.007426  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:35.007451  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:35.007456  306747 cri.go:89] found id: ""
	I1017 19:29:35.007464  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:35.007526  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.013588  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.018178  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:35.018277  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:35.048204  306747 cri.go:89] found id: ""
	I1017 19:29:35.048239  306747 logs.go:282] 0 containers: []
	W1017 19:29:35.048248  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:35.048255  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:35.048315  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:35.083329  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:35.083352  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:35.083358  306747 cri.go:89] found id: ""
	I1017 19:29:35.083366  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:35.083430  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.088406  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.094362  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:35.094435  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:35.125078  306747 cri.go:89] found id: ""
	I1017 19:29:35.125160  306747 logs.go:282] 0 containers: []
	W1017 19:29:35.125185  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:35.125198  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:35.125277  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:35.153519  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:35.153543  306747 cri.go:89] found id: ""
	I1017 19:29:35.153552  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:35.153605  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.157388  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:35.157485  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:35.189018  306747 cri.go:89] found id: ""
	I1017 19:29:35.189086  306747 logs.go:282] 0 containers: []
	W1017 19:29:35.189113  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:35.189142  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:35.189185  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:35.290719  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:35.290763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:35.310771  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:35.310803  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:35.386443  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:35.376912    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.377784    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.379400    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.379730    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.381228    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:35.376912    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.377784    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.379400    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.379730    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.381228    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:35.386470  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:35.386484  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:35.442234  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:35.442274  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:35.480866  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:35.480896  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:35.549288  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:35.549326  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:35.576073  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:35.576102  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:35.611273  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:35.611308  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:35.639731  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:35.639763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:35.671118  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:35.671148  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:38.244668  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:38.257170  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:38.257244  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:38.283218  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:38.283238  306747 cri.go:89] found id: ""
	I1017 19:29:38.283247  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:38.283305  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.287299  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:38.287365  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:38.314528  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:38.314550  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:38.314555  306747 cri.go:89] found id: ""
	I1017 19:29:38.314563  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:38.314614  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.318298  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.321948  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:38.322042  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:38.349464  306747 cri.go:89] found id: ""
	I1017 19:29:38.349503  306747 logs.go:282] 0 containers: []
	W1017 19:29:38.349516  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:38.349538  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:38.349626  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:38.379503  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:38.379565  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:38.379583  306747 cri.go:89] found id: ""
	I1017 19:29:38.379608  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:38.379675  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.383360  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.387192  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:38.387298  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:38.421165  306747 cri.go:89] found id: ""
	I1017 19:29:38.421190  306747 logs.go:282] 0 containers: []
	W1017 19:29:38.421199  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:38.421205  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:38.421293  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:38.449443  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:38.449509  306747 cri.go:89] found id: ""
	I1017 19:29:38.449530  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:38.449608  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.453406  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:38.453530  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:38.480577  306747 cri.go:89] found id: ""
	I1017 19:29:38.480640  306747 logs.go:282] 0 containers: []
	W1017 19:29:38.480662  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:38.480687  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:38.480712  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:38.558339  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:38.558375  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:38.588992  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:38.589018  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:38.688443  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:38.688478  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:38.705940  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:38.706012  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:38.738810  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:38.738836  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:38.765665  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:38.765693  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:38.841021  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:38.831886    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.832670    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.834636    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.835450    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.837074    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:38.831886    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.832670    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.834636    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.835450    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.837074    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:38.841095  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:38.841115  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:38.870763  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:38.870791  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:38.943129  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:38.943162  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:38.984504  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:38.984583  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:41.577128  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:41.588152  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:41.588230  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:41.616214  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:41.616251  306747 cri.go:89] found id: ""
	I1017 19:29:41.616261  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:41.616333  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.620228  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:41.620301  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:41.647140  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:41.647166  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:41.647172  306747 cri.go:89] found id: ""
	I1017 19:29:41.647180  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:41.647241  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.650918  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.654626  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:41.654701  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:41.680974  306747 cri.go:89] found id: ""
	I1017 19:29:41.680999  306747 logs.go:282] 0 containers: []
	W1017 19:29:41.681008  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:41.681014  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:41.681071  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:41.707036  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:41.707071  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:41.707076  306747 cri.go:89] found id: ""
	I1017 19:29:41.707084  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:41.707137  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.710947  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.714920  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:41.715001  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:41.741927  306747 cri.go:89] found id: ""
	I1017 19:29:41.741952  306747 logs.go:282] 0 containers: []
	W1017 19:29:41.741962  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:41.741968  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:41.742026  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:41.766904  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:41.766928  306747 cri.go:89] found id: ""
	I1017 19:29:41.766936  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:41.766989  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.770640  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:41.770722  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:41.797979  306747 cri.go:89] found id: ""
	I1017 19:29:41.798007  306747 logs.go:282] 0 containers: []
	W1017 19:29:41.798017  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:41.798026  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:41.798038  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:41.815570  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:41.815602  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:41.872205  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:41.872246  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:41.910906  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:41.910942  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:41.996670  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:41.996709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:42.033766  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:42.033804  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:42.143006  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:42.143055  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:42.258670  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:42.246629    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.247190    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.249238    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.250318    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.251136    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:42.246629    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.247190    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.249238    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.250318    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.251136    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:42.258694  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:42.258709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:42.294390  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:42.294422  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:42.328168  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:42.328202  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:42.357875  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:42.357932  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:44.934951  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:44.945451  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:44.945522  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:44.979178  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:44.979201  306747 cri.go:89] found id: ""
	I1017 19:29:44.979209  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:44.979263  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:44.983046  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:44.983126  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:45.035414  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:45.035438  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:45.035443  306747 cri.go:89] found id: ""
	I1017 19:29:45.035451  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:45.035519  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.048433  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.053636  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:45.053716  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:45.120373  306747 cri.go:89] found id: ""
	I1017 19:29:45.120397  306747 logs.go:282] 0 containers: []
	W1017 19:29:45.120406  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:45.120414  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:45.120482  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:45.167585  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:45.167667  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:45.167692  306747 cri.go:89] found id: ""
	I1017 19:29:45.167719  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:45.167819  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.173369  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.178434  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:45.178531  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:45.220087  306747 cri.go:89] found id: ""
	I1017 19:29:45.220115  306747 logs.go:282] 0 containers: []
	W1017 19:29:45.220125  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:45.220132  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:45.220222  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:45.275433  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:45.275475  306747 cri.go:89] found id: ""
	I1017 19:29:45.275484  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:45.275559  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.281184  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:45.281323  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:45.323004  306747 cri.go:89] found id: ""
	I1017 19:29:45.323106  306747 logs.go:282] 0 containers: []
	W1017 19:29:45.323137  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:45.323188  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:45.323238  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:45.371491  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:45.371598  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:45.464170  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:45.455221    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.456745    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.457962    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.458630    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.460252    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:45.455221    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.456745    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.457962    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.458630    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.460252    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:45.464194  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:45.464206  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:45.499416  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:45.499445  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:45.536994  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:45.537028  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:45.615136  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:45.615172  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:45.720244  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:45.720281  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:45.778577  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:45.778610  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:45.859732  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:45.859813  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:45.896812  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:45.896889  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:45.929734  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:45.929763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:48.461978  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:48.472688  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:48.472759  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:48.499995  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:48.500019  306747 cri.go:89] found id: ""
	I1017 19:29:48.500028  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:48.500084  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.504256  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:48.504330  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:48.533568  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:48.533627  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:48.533647  306747 cri.go:89] found id: ""
	I1017 19:29:48.533662  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:48.533722  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.538269  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.542307  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:48.542388  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:48.572286  306747 cri.go:89] found id: ""
	I1017 19:29:48.572355  306747 logs.go:282] 0 containers: []
	W1017 19:29:48.572379  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:48.572405  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:48.572499  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:48.599218  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:48.599246  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:48.599251  306747 cri.go:89] found id: ""
	I1017 19:29:48.599259  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:48.599310  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.603036  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.606361  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:48.606471  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:48.631930  306747 cri.go:89] found id: ""
	I1017 19:29:48.631966  306747 logs.go:282] 0 containers: []
	W1017 19:29:48.631975  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:48.631982  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:48.632052  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:48.658684  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:48.658711  306747 cri.go:89] found id: ""
	I1017 19:29:48.658720  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:48.658773  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.662512  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:48.662586  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:48.688997  306747 cri.go:89] found id: ""
	I1017 19:29:48.689022  306747 logs.go:282] 0 containers: []
	W1017 19:29:48.689031  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:48.689041  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:48.689052  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:48.789868  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:48.789919  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:48.860960  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:48.850451    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.851072    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.852664    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.852967    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.854822    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:48.850451    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.851072    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.852664    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.852967    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.854822    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:48.860984  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:48.861000  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:48.933293  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:48.933334  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:48.961662  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:48.961692  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:48.998503  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:48.998533  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:49.030219  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:49.030292  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:49.048915  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:49.048949  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:49.075217  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:49.075256  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:49.132824  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:49.132859  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:49.166233  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:49.166269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:51.747014  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:51.757581  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:51.757655  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:51.783413  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:51.783436  306747 cri.go:89] found id: ""
	I1017 19:29:51.783444  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:51.783499  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.787489  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:51.787553  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:51.815381  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:51.815404  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:51.815408  306747 cri.go:89] found id: ""
	I1017 19:29:51.815415  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:51.815467  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.819345  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.822754  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:51.822830  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:51.863882  306747 cri.go:89] found id: ""
	I1017 19:29:51.863922  306747 logs.go:282] 0 containers: []
	W1017 19:29:51.863931  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:51.863937  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:51.863997  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:51.896342  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:51.896414  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:51.896433  306747 cri.go:89] found id: ""
	I1017 19:29:51.896457  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:51.896574  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.900688  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.905025  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:51.905156  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:51.950302  306747 cri.go:89] found id: ""
	I1017 19:29:51.950325  306747 logs.go:282] 0 containers: []
	W1017 19:29:51.950333  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:51.950339  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:51.950408  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:51.984143  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:51.984164  306747 cri.go:89] found id: ""
	I1017 19:29:51.984172  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:51.984225  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.988312  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:51.988387  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:52.024692  306747 cri.go:89] found id: ""
	I1017 19:29:52.024720  306747 logs.go:282] 0 containers: []
	W1017 19:29:52.024729  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:52.024738  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:52.024750  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:52.043591  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:52.043708  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:52.083962  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:52.084045  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:52.156858  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:52.149368    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.149750    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.151218    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.151521    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.152949    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:52.149368    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.149750    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.151218    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.151521    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.152949    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:52.156879  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:52.156894  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:52.183367  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:52.183396  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:52.244364  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:52.244445  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:52.277850  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:52.277883  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:52.363433  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:52.363473  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:52.392573  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:52.392602  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:52.421470  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:52.421499  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:52.502975  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:52.503014  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:55.106386  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:55.118281  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:55.118357  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:55.147588  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:55.147612  306747 cri.go:89] found id: ""
	I1017 19:29:55.147625  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:55.147679  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.151460  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:55.151530  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:55.179417  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:55.179441  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:55.179447  306747 cri.go:89] found id: ""
	I1017 19:29:55.179455  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:55.179512  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.184062  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.187762  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:55.187876  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:55.214159  306747 cri.go:89] found id: ""
	I1017 19:29:55.214187  306747 logs.go:282] 0 containers: []
	W1017 19:29:55.214196  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:55.214203  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:55.214268  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:55.244963  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:55.244987  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:55.244992  306747 cri.go:89] found id: ""
	I1017 19:29:55.244999  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:55.245052  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.250157  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.256061  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:55.256151  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:55.287091  306747 cri.go:89] found id: ""
	I1017 19:29:55.287114  306747 logs.go:282] 0 containers: []
	W1017 19:29:55.287122  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:55.287128  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:55.287192  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:55.316175  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:55.316245  306747 cri.go:89] found id: ""
	I1017 19:29:55.316268  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:55.316359  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.321292  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:55.321374  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:55.348125  306747 cri.go:89] found id: ""
	I1017 19:29:55.348151  306747 logs.go:282] 0 containers: []
	W1017 19:29:55.348160  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:55.348169  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:55.348181  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:55.380783  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:55.380812  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:55.414351  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:55.414386  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:55.484774  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:55.475182    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.476192    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.478010    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.478543    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.480183    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:55.475182    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.476192    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.478010    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.478543    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.480183    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:55.484796  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:55.484809  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:55.556984  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:55.557018  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:55.625177  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:55.625251  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:55.655370  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:55.655398  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:55.680829  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:55.680860  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:55.763300  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:55.763331  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:55.803920  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:55.803954  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:55.900738  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:55.900773  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:58.422801  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:58.433443  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:58.433516  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:58.464116  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:58.464136  306747 cri.go:89] found id: ""
	I1017 19:29:58.464144  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:58.464212  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.468047  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:58.468169  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:58.494945  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:58.494979  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:58.494985  306747 cri.go:89] found id: ""
	I1017 19:29:58.494993  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:58.495058  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.498896  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.502320  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:58.502386  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:58.531527  306747 cri.go:89] found id: ""
	I1017 19:29:58.531550  306747 logs.go:282] 0 containers: []
	W1017 19:29:58.531558  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:58.531564  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:58.531623  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:58.558316  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:58.558337  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:58.558342  306747 cri.go:89] found id: ""
	I1017 19:29:58.558350  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:58.558403  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.562311  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.565856  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:58.565960  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:58.591130  306747 cri.go:89] found id: ""
	I1017 19:29:58.591156  306747 logs.go:282] 0 containers: []
	W1017 19:29:58.591164  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:58.591173  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:58.591229  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:58.618142  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:58.618221  306747 cri.go:89] found id: ""
	I1017 19:29:58.618237  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:58.618297  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.621817  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:58.621888  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:58.651258  306747 cri.go:89] found id: ""
	I1017 19:29:58.651284  306747 logs.go:282] 0 containers: []
	W1017 19:29:58.651293  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:58.651302  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:58.651315  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:58.720909  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:58.720942  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:58.748703  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:58.748729  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:58.776433  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:58.776463  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:58.851007  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:58.851041  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:58.884351  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:58.884382  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:58.957941  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:58.949361    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.950154    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.951742    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.952330    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.954025    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:58.949361    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.950154    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.951742    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.952330    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.954025    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:58.957961  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:58.957974  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:58.987459  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:58.987531  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:59.026978  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:59.027008  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:59.128822  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:59.128858  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:59.146047  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:59.146079  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:01.705070  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:01.718647  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:01.718748  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:01.753347  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:01.753387  306747 cri.go:89] found id: ""
	I1017 19:30:01.753395  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:01.753457  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.757741  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:01.757850  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:01.786783  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:01.786861  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:01.786873  306747 cri.go:89] found id: ""
	I1017 19:30:01.786882  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:01.787029  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.791549  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.796677  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:01.796752  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:01.826434  306747 cri.go:89] found id: ""
	I1017 19:30:01.826462  306747 logs.go:282] 0 containers: []
	W1017 19:30:01.826472  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:01.826478  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:01.826543  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:01.863544  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:01.863569  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:01.863574  306747 cri.go:89] found id: ""
	I1017 19:30:01.863582  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:01.863639  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.867992  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.872125  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:01.872206  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:01.908249  306747 cri.go:89] found id: ""
	I1017 19:30:01.908276  306747 logs.go:282] 0 containers: []
	W1017 19:30:01.908285  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:01.908292  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:01.908354  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:01.936971  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:01.937001  306747 cri.go:89] found id: ""
	I1017 19:30:01.937010  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:01.937105  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.941357  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:01.941426  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:01.982542  306747 cri.go:89] found id: ""
	I1017 19:30:01.982569  306747 logs.go:282] 0 containers: []
	W1017 19:30:01.982578  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:01.982593  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:01.982606  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:02.018942  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:02.018970  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:02.099513  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:02.099556  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:02.137502  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:02.137532  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:02.185697  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:02.185738  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:02.288795  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:02.288835  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:02.336210  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:02.336248  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:02.422878  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:02.422917  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:02.453635  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:02.453662  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:02.540123  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:02.540164  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:02.558457  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:02.558491  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:02.629161  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:02.619096   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.619981   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.621652   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.622279   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.624619   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:02.619096   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.619981   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.621652   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.622279   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.624619   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:05.130448  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:05.144120  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:05.144214  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:05.175291  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:05.175324  306747 cri.go:89] found id: ""
	I1017 19:30:05.175334  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:05.175394  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.179428  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:05.179514  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:05.212486  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:05.212511  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:05.212541  306747 cri.go:89] found id: ""
	I1017 19:30:05.212550  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:05.212606  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.216463  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.220220  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:05.220295  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:05.249597  306747 cri.go:89] found id: ""
	I1017 19:30:05.249624  306747 logs.go:282] 0 containers: []
	W1017 19:30:05.249633  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:05.249640  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:05.249706  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:05.276856  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:05.276878  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:05.276883  306747 cri.go:89] found id: ""
	I1017 19:30:05.276890  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:05.276945  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.280586  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.284132  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:05.284196  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:05.312051  306747 cri.go:89] found id: ""
	I1017 19:30:05.312081  306747 logs.go:282] 0 containers: []
	W1017 19:30:05.312090  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:05.312096  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:05.312154  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:05.339324  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:05.339345  306747 cri.go:89] found id: ""
	I1017 19:30:05.339353  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:05.339406  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.343274  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:05.343351  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:05.371042  306747 cri.go:89] found id: ""
	I1017 19:30:05.371067  306747 logs.go:282] 0 containers: []
	W1017 19:30:05.371076  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:05.371086  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:05.371103  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:05.395923  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:05.395957  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:05.453746  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:05.453785  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:05.495400  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:05.495436  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:05.522354  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:05.522384  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:05.603168  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:05.603203  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:05.635130  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:05.635158  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:05.730159  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:05.730196  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:05.805436  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:05.797321   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.798191   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.799878   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.800180   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.801717   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:05.797321   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.798191   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.799878   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.800180   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.801717   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:05.805458  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:05.805471  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:05.831415  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:05.831453  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:05.915270  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:05.915309  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:08.445553  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:08.457157  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:08.457224  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:08.489306  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:08.489335  306747 cri.go:89] found id: ""
	I1017 19:30:08.489344  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:08.489399  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.493424  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:08.493497  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:08.523021  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:08.523056  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:08.523061  306747 cri.go:89] found id: ""
	I1017 19:30:08.523069  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:08.523133  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.527165  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.530929  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:08.531043  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:08.560240  306747 cri.go:89] found id: ""
	I1017 19:30:08.560266  306747 logs.go:282] 0 containers: []
	W1017 19:30:08.560275  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:08.560282  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:08.560340  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:08.587950  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:08.587974  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:08.587979  306747 cri.go:89] found id: ""
	I1017 19:30:08.587987  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:08.588048  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.591797  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.595627  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:08.595710  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:08.622023  306747 cri.go:89] found id: ""
	I1017 19:30:08.622048  306747 logs.go:282] 0 containers: []
	W1017 19:30:08.622057  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:08.622064  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:08.622123  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:08.652098  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:08.652194  306747 cri.go:89] found id: ""
	I1017 19:30:08.652232  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:08.652399  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.657095  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:08.657180  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:08.687380  306747 cri.go:89] found id: ""
	I1017 19:30:08.687404  306747 logs.go:282] 0 containers: []
	W1017 19:30:08.687412  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:08.687421  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:08.687433  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:08.785046  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:08.785084  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:08.815287  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:08.815318  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:08.880972  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:08.881008  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:08.919918  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:08.919947  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:08.994592  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:08.994632  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:09.029806  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:09.029833  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:09.059196  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:09.059224  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:09.077625  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:09.077658  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:09.155722  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:09.147557   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.148286   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.149973   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.150565   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.152238   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:09.147557   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.148286   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.149973   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.150565   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.152238   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:09.155746  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:09.155759  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:09.230856  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:09.230895  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:11.763218  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:11.774210  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:11.774310  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:11.807759  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:11.807778  306747 cri.go:89] found id: ""
	I1017 19:30:11.807786  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:11.807840  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.812129  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:11.812202  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:11.840430  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:11.840451  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:11.840459  306747 cri.go:89] found id: ""
	I1017 19:30:11.840467  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:11.840562  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.844491  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.848972  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:11.849065  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:11.876962  306747 cri.go:89] found id: ""
	I1017 19:30:11.876986  306747 logs.go:282] 0 containers: []
	W1017 19:30:11.876994  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:11.877000  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:11.877060  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:11.907338  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:11.907402  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:11.907421  306747 cri.go:89] found id: ""
	I1017 19:30:11.907446  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:11.907534  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.911700  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.915708  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:11.915823  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:11.945931  306747 cri.go:89] found id: ""
	I1017 19:30:11.945968  306747 logs.go:282] 0 containers: []
	W1017 19:30:11.945976  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:11.945983  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:11.946041  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:11.973489  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:11.973509  306747 cri.go:89] found id: ""
	I1017 19:30:11.973517  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:11.973582  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.979325  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:11.979401  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:12.006387  306747 cri.go:89] found id: ""
	I1017 19:30:12.006415  306747 logs.go:282] 0 containers: []
	W1017 19:30:12.006425  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:12.006437  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:12.006452  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:12.112142  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:12.112180  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:12.130633  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:12.130662  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:12.219234  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:12.204079   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.204586   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.208545   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.212324   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.214784   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:12.204079   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.204586   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.208545   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.212324   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.214784   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:12.219259  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:12.219274  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:12.248889  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:12.248918  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:12.284961  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:12.284995  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:12.360893  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:12.360930  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:12.394406  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:12.394433  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:12.420215  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:12.420245  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:12.477947  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:12.477980  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:12.559952  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:12.559989  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:15.098061  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:15.110601  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:15.110673  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:15.142831  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:15.142854  306747 cri.go:89] found id: ""
	I1017 19:30:15.142863  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:15.142922  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.147216  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:15.147336  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:15.177462  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:15.177487  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:15.177492  306747 cri.go:89] found id: ""
	I1017 19:30:15.177500  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:15.177556  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.182001  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.186668  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:15.186752  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:15.218350  306747 cri.go:89] found id: ""
	I1017 19:30:15.218375  306747 logs.go:282] 0 containers: []
	W1017 19:30:15.218383  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:15.218389  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:15.218449  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:15.247656  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:15.247730  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:15.247750  306747 cri.go:89] found id: ""
	I1017 19:30:15.247774  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:15.247847  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.251499  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.254966  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:15.255039  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:15.282034  306747 cri.go:89] found id: ""
	I1017 19:30:15.282056  306747 logs.go:282] 0 containers: []
	W1017 19:30:15.282065  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:15.282071  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:15.282131  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:15.313582  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:15.313643  306747 cri.go:89] found id: ""
	I1017 19:30:15.313665  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:15.313739  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.317325  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:15.317407  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:15.343894  306747 cri.go:89] found id: ""
	I1017 19:30:15.343921  306747 logs.go:282] 0 containers: []
	W1017 19:30:15.343937  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:15.343947  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:15.343967  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:15.416772  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:15.408215   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.409020   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.410494   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.410798   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.412827   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:15.408215   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.409020   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.410494   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.410798   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.412827   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:15.416794  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:15.416807  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:15.455991  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:15.456060  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:15.533107  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:15.533144  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:15.605424  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:15.605464  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:15.633544  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:15.633572  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:15.710509  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:15.710545  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:15.744271  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:15.744352  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:15.844584  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:15.844621  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:15.865714  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:15.865745  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:15.910911  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:15.910945  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:18.440664  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:18.451576  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:18.451643  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:18.480927  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:18.480948  306747 cri.go:89] found id: ""
	I1017 19:30:18.480956  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:18.481010  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.484797  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:18.484886  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:18.512958  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:18.513034  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:18.513045  306747 cri.go:89] found id: ""
	I1017 19:30:18.513053  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:18.513106  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.516855  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.520298  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:18.520369  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:18.546427  306747 cri.go:89] found id: ""
	I1017 19:30:18.546453  306747 logs.go:282] 0 containers: []
	W1017 19:30:18.546462  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:18.546468  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:18.546532  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:18.573945  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:18.574007  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:18.574021  306747 cri.go:89] found id: ""
	I1017 19:30:18.574030  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:18.574094  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.577681  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.581276  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:18.581357  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:18.607914  306747 cri.go:89] found id: ""
	I1017 19:30:18.607941  306747 logs.go:282] 0 containers: []
	W1017 19:30:18.607950  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:18.607956  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:18.608013  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:18.634762  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:18.634781  306747 cri.go:89] found id: ""
	I1017 19:30:18.634789  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:18.634842  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.638638  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:18.638754  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:18.666586  306747 cri.go:89] found id: ""
	I1017 19:30:18.666610  306747 logs.go:282] 0 containers: []
	W1017 19:30:18.666618  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:18.666627  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:18.666639  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:18.685607  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:18.685637  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:18.740058  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:18.740088  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:18.816374  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:18.816410  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:18.842654  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:18.842686  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:18.921888  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:18.913390   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.913958   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.915701   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.916258   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.918025   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:18.913390   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.913958   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.915701   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.916258   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.918025   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:18.921914  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:18.921930  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:18.948267  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:18.948298  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:19.003855  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:19.003894  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:19.033396  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:19.033424  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:19.128308  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:19.128353  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:19.162140  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:19.162166  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:21.764178  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:21.775522  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:21.775596  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:21.803342  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:21.803367  306747 cri.go:89] found id: ""
	I1017 19:30:21.803377  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:21.803442  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.807522  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:21.807598  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:21.836696  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:21.836720  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:21.836726  306747 cri.go:89] found id: ""
	I1017 19:30:21.836734  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:21.836789  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.840752  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.844455  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:21.844557  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:21.872104  306747 cri.go:89] found id: ""
	I1017 19:30:21.872131  306747 logs.go:282] 0 containers: []
	W1017 19:30:21.872140  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:21.872147  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:21.872210  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:21.908413  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:21.908439  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:21.908448  306747 cri.go:89] found id: ""
	I1017 19:30:21.908455  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:21.908513  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.912640  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.916402  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:21.916476  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:21.950380  306747 cri.go:89] found id: ""
	I1017 19:30:21.950466  306747 logs.go:282] 0 containers: []
	W1017 19:30:21.950498  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:21.950517  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:21.950628  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:21.983152  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:21.983177  306747 cri.go:89] found id: ""
	I1017 19:30:21.983187  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:21.983243  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.986962  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:21.987037  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:22.019909  306747 cri.go:89] found id: ""
	I1017 19:30:22.019935  306747 logs.go:282] 0 containers: []
	W1017 19:30:22.019944  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:22.019953  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:22.019996  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:22.069135  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:22.069175  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:22.103886  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:22.103916  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:22.133109  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:22.133136  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:22.215579  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:22.215617  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:22.297981  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:22.289181   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.289836   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.291072   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.291590   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.293032   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:22.289181   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.289836   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.291072   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.291590   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.293032   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:22.298003  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:22.298017  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:22.373102  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:22.373140  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:22.406083  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:22.406110  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:22.506621  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:22.506659  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:22.526268  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:22.526299  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:22.557755  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:22.557784  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:25.116647  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:25.128310  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:25.128412  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:25.158258  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:25.158281  306747 cri.go:89] found id: ""
	I1017 19:30:25.158293  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:25.158358  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.162693  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:25.162773  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:25.197276  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:25.197301  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:25.197307  306747 cri.go:89] found id: ""
	I1017 19:30:25.197315  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:25.197407  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.201342  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.205350  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:25.205422  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:25.233590  306747 cri.go:89] found id: ""
	I1017 19:30:25.233617  306747 logs.go:282] 0 containers: []
	W1017 19:30:25.233627  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:25.233634  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:25.233693  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:25.260459  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:25.260486  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:25.260492  306747 cri.go:89] found id: ""
	I1017 19:30:25.260500  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:25.260582  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.266116  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.269609  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:25.269709  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:25.299945  306747 cri.go:89] found id: ""
	I1017 19:30:25.299970  306747 logs.go:282] 0 containers: []
	W1017 19:30:25.299979  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:25.299986  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:25.300062  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:25.327588  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:25.327611  306747 cri.go:89] found id: ""
	I1017 19:30:25.327619  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:25.327695  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.331614  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:25.331714  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:25.360945  306747 cri.go:89] found id: ""
	I1017 19:30:25.360969  306747 logs.go:282] 0 containers: []
	W1017 19:30:25.360978  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:25.360987  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:25.361018  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:25.419332  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:25.419371  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:25.455422  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:25.455454  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:25.533420  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:25.533454  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:25.561277  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:25.561303  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:25.589003  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:25.589032  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:25.667191  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:25.667225  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:25.697081  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:25.697108  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:25.796723  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:25.796756  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:25.817825  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:25.817854  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:25.895602  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:25.887039   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.887933   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.889709   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.890373   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.891870   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:25.887039   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.887933   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.889709   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.890373   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.891870   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:25.895626  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:25.895639  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:28.421545  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:28.432472  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:28.432573  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:28.461368  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:28.461391  306747 cri.go:89] found id: ""
	I1017 19:30:28.461400  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:28.461454  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.466145  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:28.466221  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:28.496790  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:28.496814  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:28.496822  306747 cri.go:89] found id: ""
	I1017 19:30:28.496830  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:28.496886  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.500588  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.504150  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:28.504250  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:28.530114  306747 cri.go:89] found id: ""
	I1017 19:30:28.530141  306747 logs.go:282] 0 containers: []
	W1017 19:30:28.530150  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:28.530157  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:28.530257  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:28.560630  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:28.560660  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:28.560675  306747 cri.go:89] found id: ""
	I1017 19:30:28.560684  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:28.560737  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.564422  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.568093  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:28.568165  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:28.598927  306747 cri.go:89] found id: ""
	I1017 19:30:28.598954  306747 logs.go:282] 0 containers: []
	W1017 19:30:28.598963  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:28.598969  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:28.599075  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:28.625977  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:28.626001  306747 cri.go:89] found id: ""
	I1017 19:30:28.626010  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:28.626090  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.629847  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:28.629929  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:28.656469  306747 cri.go:89] found id: ""
	I1017 19:30:28.656494  306747 logs.go:282] 0 containers: []
	W1017 19:30:28.656503  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:28.656513  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:28.656548  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:28.758826  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:28.758863  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:28.778387  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:28.778416  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:28.845382  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:28.837571   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.838156   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.839753   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.840320   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.841429   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:28.837571   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.838156   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.839753   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.840320   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.841429   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:28.845407  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:28.845420  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:28.889092  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:28.889167  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:28.970950  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:28.970986  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:29.003996  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:29.004028  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:29.064888  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:29.064926  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:29.105700  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:29.105729  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:29.141040  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:29.141066  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:29.224674  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:29.224710  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:31.757505  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:31.767848  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:31.767914  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:31.800059  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:31.800082  306747 cri.go:89] found id: ""
	I1017 19:30:31.800093  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:31.800147  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.803723  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:31.803795  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:31.830502  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:31.830525  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:31.830530  306747 cri.go:89] found id: ""
	I1017 19:30:31.830546  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:31.830600  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.834866  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.838218  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:31.838293  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:31.866917  306747 cri.go:89] found id: ""
	I1017 19:30:31.866944  306747 logs.go:282] 0 containers: []
	W1017 19:30:31.866953  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:31.866960  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:31.867015  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:31.898652  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:31.898673  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:31.898679  306747 cri.go:89] found id: ""
	I1017 19:30:31.898692  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:31.898745  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.902404  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.905916  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:31.906005  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:31.936988  306747 cri.go:89] found id: ""
	I1017 19:30:31.937055  306747 logs.go:282] 0 containers: []
	W1017 19:30:31.937080  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:31.937103  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:31.937192  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:31.965478  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:31.965506  306747 cri.go:89] found id: ""
	I1017 19:30:31.965515  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:31.965570  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.969541  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:31.969611  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:31.997913  306747 cri.go:89] found id: ""
	I1017 19:30:31.997936  306747 logs.go:282] 0 containers: []
	W1017 19:30:31.997945  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:31.997954  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:31.997967  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:32.075635  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:32.076176  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:32.124512  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:32.124607  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:32.203895  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:32.203930  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:32.237712  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:32.237745  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:32.265784  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:32.265812  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:32.296288  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:32.296316  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:32.413833  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:32.413869  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:32.431287  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:32.431316  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:32.496198  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:32.487969   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.488616   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.490480   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.490935   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.492578   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:32.487969   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.488616   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.490480   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.490935   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.492578   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:32.496222  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:32.496238  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:32.522527  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:32.522556  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:35.098806  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:35.114025  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:35.114098  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:35.150192  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:35.150215  306747 cri.go:89] found id: ""
	I1017 19:30:35.150224  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:35.150291  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.154431  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:35.154528  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:35.187248  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:35.187274  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:35.187280  306747 cri.go:89] found id: ""
	I1017 19:30:35.187288  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:35.187342  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.190988  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.194467  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:35.194544  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:35.226183  306747 cri.go:89] found id: ""
	I1017 19:30:35.226209  306747 logs.go:282] 0 containers: []
	W1017 19:30:35.226228  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:35.226277  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:35.226345  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:35.254492  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:35.254514  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:35.254532  306747 cri.go:89] found id: ""
	I1017 19:30:35.254542  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:35.254600  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.258515  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.262160  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:35.262245  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:35.290479  306747 cri.go:89] found id: ""
	I1017 19:30:35.290556  306747 logs.go:282] 0 containers: []
	W1017 19:30:35.290573  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:35.290581  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:35.290647  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:35.320673  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:35.320696  306747 cri.go:89] found id: ""
	I1017 19:30:35.320705  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:35.320760  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.324577  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:35.324650  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:35.351615  306747 cri.go:89] found id: ""
	I1017 19:30:35.351643  306747 logs.go:282] 0 containers: []
	W1017 19:30:35.351652  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:35.351662  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:35.351674  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:35.426069  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:35.414413   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.418263   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.419343   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.419972   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.421885   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:35.414413   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.418263   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.419343   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.419972   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.421885   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:35.426092  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:35.426105  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:35.458415  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:35.458445  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:35.532727  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:35.532763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:35.570789  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:35.570821  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:35.654656  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:35.654691  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:35.682337  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:35.682368  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:35.783217  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:35.783263  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:35.809044  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:35.809075  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:35.836181  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:35.836213  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:35.922975  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:35.923013  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:38.460477  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:38.471359  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:38.471462  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:38.500899  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:38.500923  306747 cri.go:89] found id: ""
	I1017 19:30:38.500932  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:38.501005  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.505166  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:38.505244  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:38.531743  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:38.531766  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:38.531771  306747 cri.go:89] found id: ""
	I1017 19:30:38.531779  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:38.531842  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.535645  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.539501  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:38.539580  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:38.568890  306747 cri.go:89] found id: ""
	I1017 19:30:38.568915  306747 logs.go:282] 0 containers: []
	W1017 19:30:38.568923  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:38.568929  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:38.568989  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:38.594452  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:38.594476  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:38.594482  306747 cri.go:89] found id: ""
	I1017 19:30:38.594490  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:38.594544  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.598456  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.606409  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:38.606483  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:38.632993  306747 cri.go:89] found id: ""
	I1017 19:30:38.633015  306747 logs.go:282] 0 containers: []
	W1017 19:30:38.633024  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:38.633030  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:38.633091  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:38.659776  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:38.659800  306747 cri.go:89] found id: ""
	I1017 19:30:38.659809  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:38.659861  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.663404  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:38.663507  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:38.688978  306747 cri.go:89] found id: ""
	I1017 19:30:38.689003  306747 logs.go:282] 0 containers: []
	W1017 19:30:38.689012  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:38.689021  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:38.689033  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:38.722471  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:38.722497  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:38.800538  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:38.800575  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:38.832423  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:38.832451  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:38.939609  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:38.939648  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:38.959665  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:38.959701  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:39.039314  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:39.030321   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.030924   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.032747   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.033627   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.034935   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:39.030321   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.030924   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.032747   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.033627   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.034935   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:39.039340  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:39.039355  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:39.113637  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:39.113709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:39.148504  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:39.148662  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:39.223019  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:39.223056  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:39.253605  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:39.253635  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:41.780640  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:41.791876  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:41.791949  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:41.819510  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:41.819583  306747 cri.go:89] found id: ""
	I1017 19:30:41.819606  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:41.819691  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.824390  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:41.824462  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:41.856605  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:41.856636  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:41.856642  306747 cri.go:89] found id: ""
	I1017 19:30:41.856649  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:41.856715  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.864466  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.868588  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:41.868666  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:41.903466  306747 cri.go:89] found id: ""
	I1017 19:30:41.903498  306747 logs.go:282] 0 containers: []
	W1017 19:30:41.903507  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:41.903514  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:41.903571  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:41.930657  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:41.930682  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:41.930687  306747 cri.go:89] found id: ""
	I1017 19:30:41.930694  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:41.930749  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.934754  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.938781  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:41.938871  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:41.968280  306747 cri.go:89] found id: ""
	I1017 19:30:41.968306  306747 logs.go:282] 0 containers: []
	W1017 19:30:41.968315  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:41.968322  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:41.968402  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:41.995850  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:41.995931  306747 cri.go:89] found id: ""
	I1017 19:30:41.995955  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:41.996030  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.999630  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:41.999700  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:42.044891  306747 cri.go:89] found id: ""
	I1017 19:30:42.044926  306747 logs.go:282] 0 containers: []
	W1017 19:30:42.044935  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:42.044952  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:42.044971  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:42.174128  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:42.174267  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:42.224381  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:42.224413  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:42.333478  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:42.333518  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:42.353368  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:42.353403  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:42.391604  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:42.391635  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:42.426317  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:42.426347  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:42.503367  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:42.494794   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.495471   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.497096   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.497695   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.499206   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:42.494794   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.495471   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.497096   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.497695   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.499206   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:42.503388  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:42.503401  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:42.560324  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:42.560359  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:42.632932  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:42.632968  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:42.665758  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:42.665844  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:45.196869  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:45.213931  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:45.214024  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:45.259283  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:45.259312  306747 cri.go:89] found id: ""
	I1017 19:30:45.259321  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:45.259390  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.265805  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:45.265913  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:45.316071  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:45.316098  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:45.316103  306747 cri.go:89] found id: ""
	I1017 19:30:45.316112  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:45.316178  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.329246  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.342518  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:45.342722  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:45.403649  306747 cri.go:89] found id: ""
	I1017 19:30:45.403681  306747 logs.go:282] 0 containers: []
	W1017 19:30:45.403691  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:45.403700  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:45.403771  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:45.436373  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:45.436398  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:45.436404  306747 cri.go:89] found id: ""
	I1017 19:30:45.436412  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:45.436470  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.442171  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.446282  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:45.446378  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:45.480185  306747 cri.go:89] found id: ""
	I1017 19:30:45.480211  306747 logs.go:282] 0 containers: []
	W1017 19:30:45.480269  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:45.480281  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:45.480348  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:45.519821  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:45.519845  306747 cri.go:89] found id: ""
	I1017 19:30:45.519853  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:45.519916  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.523961  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:45.524044  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:45.553268  306747 cri.go:89] found id: ""
	I1017 19:30:45.553295  306747 logs.go:282] 0 containers: []
	W1017 19:30:45.553336  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:45.553353  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:45.553376  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:45.581168  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:45.581199  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:45.659459  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:45.659495  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:45.698325  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:45.698356  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:45.730552  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:45.730578  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:45.761205  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:45.761233  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:45.859241  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:45.859345  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:45.879219  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:45.879249  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:45.956579  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:45.956613  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:46.038168  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:46.038207  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:46.088885  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:46.088920  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:46.156435  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:46.147068   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.148033   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.149640   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.150155   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.151669   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:46.147068   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.148033   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.149640   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.150155   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.151669   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:48.657371  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:48.668345  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:48.668414  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:48.699974  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:48.699994  306747 cri.go:89] found id: ""
	I1017 19:30:48.700002  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:48.700055  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.703706  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:48.703773  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:48.729231  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:48.729255  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:48.729260  306747 cri.go:89] found id: ""
	I1017 19:30:48.729267  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:48.729347  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.733057  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.736560  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:48.736650  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:48.769891  306747 cri.go:89] found id: ""
	I1017 19:30:48.769917  306747 logs.go:282] 0 containers: []
	W1017 19:30:48.769925  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:48.769932  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:48.769988  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:48.796614  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:48.796633  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:48.796638  306747 cri.go:89] found id: ""
	I1017 19:30:48.796645  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:48.796697  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.800347  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.803641  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:48.803707  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:48.829352  306747 cri.go:89] found id: ""
	I1017 19:30:48.829375  306747 logs.go:282] 0 containers: []
	W1017 19:30:48.829384  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:48.829390  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:48.829448  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:48.863517  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:48.863542  306747 cri.go:89] found id: ""
	I1017 19:30:48.863551  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:48.863603  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.867339  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:48.867411  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:48.896584  306747 cri.go:89] found id: ""
	I1017 19:30:48.896609  306747 logs.go:282] 0 containers: []
	W1017 19:30:48.896618  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:48.896626  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:48.896639  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:48.990111  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:48.990146  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:49.015233  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:49.015265  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:49.040589  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:49.040623  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:49.100203  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:49.100237  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:49.135876  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:49.135909  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:49.168685  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:49.168756  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:49.211941  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:49.212009  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:49.278129  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:49.270279   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.271015   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.272492   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.272926   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.274542   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:49.270279   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.271015   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.272492   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.272926   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.274542   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:49.278151  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:49.278166  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:49.355582  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:49.355620  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:49.385861  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:49.385888  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:51.961962  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:51.973739  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:51.973839  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:52.007060  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:52.007089  306747 cri.go:89] found id: ""
	I1017 19:30:52.007098  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:52.007173  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.011950  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:52.012025  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:52.043424  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:52.043445  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:52.043450  306747 cri.go:89] found id: ""
	I1017 19:30:52.043458  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:52.043515  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.048102  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.051750  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:52.051836  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:52.091285  306747 cri.go:89] found id: ""
	I1017 19:30:52.091362  306747 logs.go:282] 0 containers: []
	W1017 19:30:52.091384  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:52.091412  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:52.091533  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:52.120853  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:52.120928  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:52.120947  306747 cri.go:89] found id: ""
	I1017 19:30:52.120962  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:52.121037  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.125047  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.128913  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:52.129029  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:52.155112  306747 cri.go:89] found id: ""
	I1017 19:30:52.155138  306747 logs.go:282] 0 containers: []
	W1017 19:30:52.155147  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:52.155153  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:52.155217  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:52.181654  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:52.181678  306747 cri.go:89] found id: ""
	I1017 19:30:52.181686  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:52.181738  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.185468  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:52.185538  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:52.210532  306747 cri.go:89] found id: ""
	I1017 19:30:52.210558  306747 logs.go:282] 0 containers: []
	W1017 19:30:52.210567  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:52.210577  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:52.210591  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:52.283758  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:52.283793  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:52.321133  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:52.321172  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:52.349409  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:52.349440  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:52.454035  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:52.454072  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:52.474228  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:52.474336  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:52.549970  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:52.541938   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.542794   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.543926   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.544704   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.546272   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:52.541938   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.542794   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.543926   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.544704   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.546272   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:52.550045  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:52.550073  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:52.637174  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:52.637221  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:52.668341  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:52.668418  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:52.761051  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:52.761091  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:52.792065  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:52.792160  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:55.319606  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:55.330935  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:55.331008  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:55.358717  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:55.358739  306747 cri.go:89] found id: ""
	I1017 19:30:55.358747  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:55.358802  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.362654  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:55.362769  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:55.397277  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:55.397301  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:55.397306  306747 cri.go:89] found id: ""
	I1017 19:30:55.397314  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:55.397368  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.401240  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.405131  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:55.405244  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:55.432480  306747 cri.go:89] found id: ""
	I1017 19:30:55.432602  306747 logs.go:282] 0 containers: []
	W1017 19:30:55.432627  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:55.432666  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:55.432750  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:55.465240  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:55.465314  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:55.465333  306747 cri.go:89] found id: ""
	I1017 19:30:55.465357  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:55.465448  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.469415  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.473023  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:55.473096  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:55.499608  306747 cri.go:89] found id: ""
	I1017 19:30:55.499681  306747 logs.go:282] 0 containers: []
	W1017 19:30:55.499704  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:55.499724  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:55.499814  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:55.526471  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:55.526494  306747 cri.go:89] found id: ""
	I1017 19:30:55.526502  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:55.526586  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.530319  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:55.530395  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:55.558617  306747 cri.go:89] found id: ""
	I1017 19:30:55.558639  306747 logs.go:282] 0 containers: []
	W1017 19:30:55.558647  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:55.558656  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:55.558668  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:55.578357  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:55.578390  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:55.642730  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:55.635023   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.635478   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.637010   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.637409   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.638832   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:55.635023   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.635478   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.637010   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.637409   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.638832   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:55.642749  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:55.642763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:55.673301  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:55.673329  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:55.735266  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:55.735301  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:55.777444  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:55.777474  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:55.891903  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:55.891985  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:55.976455  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:55.976492  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:56.005202  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:56.005238  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:56.034021  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:56.034049  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:56.086550  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:56.086581  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:58.687094  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:58.698343  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:58.698420  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:58.737082  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:58.737144  306747 cri.go:89] found id: ""
	I1017 19:30:58.737165  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:58.737251  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.740769  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:58.740830  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:58.768900  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:58.768920  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:58.768931  306747 cri.go:89] found id: ""
	I1017 19:30:58.768938  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:58.768991  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.773597  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.777023  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:58.777094  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:58.808627  306747 cri.go:89] found id: ""
	I1017 19:30:58.808654  306747 logs.go:282] 0 containers: []
	W1017 19:30:58.808675  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:58.808681  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:58.808778  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:58.833787  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:58.833810  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:58.833815  306747 cri.go:89] found id: ""
	I1017 19:30:58.833823  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:58.833902  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.837729  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.841076  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:58.841161  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:58.876060  306747 cri.go:89] found id: ""
	I1017 19:30:58.876089  306747 logs.go:282] 0 containers: []
	W1017 19:30:58.876099  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:58.876107  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:58.876189  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:58.906434  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:58.906509  306747 cri.go:89] found id: ""
	I1017 19:30:58.906524  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:58.906598  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.911053  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:58.911127  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:58.936724  306747 cri.go:89] found id: ""
	I1017 19:30:58.936748  306747 logs.go:282] 0 containers: []
	W1017 19:30:58.936757  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:58.936765  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:58.936776  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:59.014607  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:59.014643  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:59.044576  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:59.044655  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:59.124177  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:59.124211  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:59.156709  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:59.156737  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:59.175384  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:59.175413  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:59.209100  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:59.209136  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:59.235216  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:59.235244  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:59.337596  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:59.337631  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:59.405118  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:59.396347   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.396989   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.398679   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.399208   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.400795   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:59.396347   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.396989   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.398679   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.399208   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.400795   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:59.405140  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:59.405153  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:59.431225  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:59.431255  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:02.008171  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:02.020307  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:02.020387  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:02.051051  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:02.051079  306747 cri.go:89] found id: ""
	I1017 19:31:02.051099  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:02.051161  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.056015  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:02.056088  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:02.089743  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:02.089817  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:02.089836  306747 cri.go:89] found id: ""
	I1017 19:31:02.089856  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:02.089943  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.093857  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.097708  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:02.097837  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:02.123389  306747 cri.go:89] found id: ""
	I1017 19:31:02.123411  306747 logs.go:282] 0 containers: []
	W1017 19:31:02.123420  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:02.123426  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:02.123483  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:02.150505  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:02.150582  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:02.150596  306747 cri.go:89] found id: ""
	I1017 19:31:02.150605  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:02.150681  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.154543  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.158104  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:02.158177  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:02.186868  306747 cri.go:89] found id: ""
	I1017 19:31:02.186895  306747 logs.go:282] 0 containers: []
	W1017 19:31:02.186904  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:02.186911  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:02.186974  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:02.215359  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:02.215426  306747 cri.go:89] found id: ""
	I1017 19:31:02.215451  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:02.215524  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.219153  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:02.219266  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:02.246345  306747 cri.go:89] found id: ""
	I1017 19:31:02.246371  306747 logs.go:282] 0 containers: []
	W1017 19:31:02.246381  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:02.246391  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:02.246402  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:02.280313  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:02.280387  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:02.385786  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:02.385822  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:02.414602  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:02.414679  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:02.492313  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:02.492350  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:02.511027  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:02.511067  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:02.590723  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:02.582016   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.582767   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.584046   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.585740   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.586186   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:02.582016   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.582767   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.584046   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.585740   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.586186   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:02.590747  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:02.590762  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:02.653228  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:02.653264  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:02.687148  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:02.687183  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:02.790229  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:02.790269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:02.819586  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:02.819615  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:05.355439  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:05.367250  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:05.367353  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:05.393587  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:05.393611  306747 cri.go:89] found id: ""
	I1017 19:31:05.393620  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:05.393674  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.397564  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:05.397685  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:05.423815  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:05.423840  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:05.423845  306747 cri.go:89] found id: ""
	I1017 19:31:05.423853  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:05.423921  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.427632  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.431060  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:05.431129  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:05.457152  306747 cri.go:89] found id: ""
	I1017 19:31:05.457176  306747 logs.go:282] 0 containers: []
	W1017 19:31:05.457186  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:05.457192  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:05.457256  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:05.483757  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:05.483779  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:05.483784  306747 cri.go:89] found id: ""
	I1017 19:31:05.483791  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:05.483845  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.487471  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.490789  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:05.490859  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:05.516653  306747 cri.go:89] found id: ""
	I1017 19:31:05.516676  306747 logs.go:282] 0 containers: []
	W1017 19:31:05.516684  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:05.516690  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:05.516793  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:05.542033  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:05.542059  306747 cri.go:89] found id: ""
	I1017 19:31:05.542091  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:05.542153  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.545908  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:05.545978  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:05.571870  306747 cri.go:89] found id: ""
	I1017 19:31:05.571892  306747 logs.go:282] 0 containers: []
	W1017 19:31:05.571901  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:05.571909  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:05.571923  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:05.649030  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:05.639899   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.640483   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.642053   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.642716   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.644399   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:05.639899   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.640483   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.642053   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.642716   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.644399   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:05.649050  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:05.649062  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:05.677036  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:05.677065  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:05.718764  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:05.718795  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:05.803861  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:05.803897  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:05.835788  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:05.835814  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:05.864823  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:05.864853  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:05.947756  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:05.947788  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:05.979938  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:05.980005  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:06.080355  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:06.080392  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:06.104116  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:06.104145  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:08.667177  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:08.677727  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:08.677793  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:08.704338  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:08.704362  306747 cri.go:89] found id: ""
	I1017 19:31:08.704370  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:08.704422  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.707981  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:08.708049  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:08.733111  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:08.733130  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:08.733135  306747 cri.go:89] found id: ""
	I1017 19:31:08.733142  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:08.733201  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.737039  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.740374  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:08.740480  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:08.768239  306747 cri.go:89] found id: ""
	I1017 19:31:08.768307  306747 logs.go:282] 0 containers: []
	W1017 19:31:08.768338  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:08.768381  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:08.768471  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:08.795436  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:08.795499  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:08.795524  306747 cri.go:89] found id: ""
	I1017 19:31:08.795537  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:08.795609  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.799450  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.803242  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:08.803312  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:08.831323  306747 cri.go:89] found id: ""
	I1017 19:31:08.831348  306747 logs.go:282] 0 containers: []
	W1017 19:31:08.831358  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:08.831364  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:08.831427  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:08.865991  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:08.866014  306747 cri.go:89] found id: ""
	I1017 19:31:08.866022  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:08.866077  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.870085  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:08.870174  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:08.905447  306747 cri.go:89] found id: ""
	I1017 19:31:08.905475  306747 logs.go:282] 0 containers: []
	W1017 19:31:08.905483  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:08.905492  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:08.905504  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:08.988463  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:08.988574  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:09.021674  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:09.021711  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:09.050080  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:09.050111  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:09.126939  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:09.126972  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:09.161551  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:09.161580  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:09.179459  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:09.179490  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:09.209038  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:09.209066  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:09.271767  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:09.271810  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:09.373919  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:09.373956  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:09.439533  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:09.431442   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.432120   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.433687   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.434214   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.435793   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:09.431442   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.432120   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.433687   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.434214   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.435793   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:09.439556  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:09.439570  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:11.978816  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:11.990102  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:11.990174  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:12.023196  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:12.023225  306747 cri.go:89] found id: ""
	I1017 19:31:12.023235  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:12.023302  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.027739  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:12.027832  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:12.055241  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:12.055265  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:12.055270  306747 cri.go:89] found id: ""
	I1017 19:31:12.055278  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:12.055336  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.059592  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.064052  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:12.064121  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:12.103548  306747 cri.go:89] found id: ""
	I1017 19:31:12.103575  306747 logs.go:282] 0 containers: []
	W1017 19:31:12.103584  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:12.103591  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:12.103650  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:12.131971  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:12.131995  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:12.132000  306747 cri.go:89] found id: ""
	I1017 19:31:12.132008  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:12.132063  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.136064  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.139529  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:12.139597  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:12.165954  306747 cri.go:89] found id: ""
	I1017 19:31:12.165977  306747 logs.go:282] 0 containers: []
	W1017 19:31:12.165985  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:12.165991  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:12.166049  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:12.195543  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:12.195568  306747 cri.go:89] found id: ""
	I1017 19:31:12.195577  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:12.195632  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.199531  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:12.199603  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:12.225881  306747 cri.go:89] found id: ""
	I1017 19:31:12.225911  306747 logs.go:282] 0 containers: []
	W1017 19:31:12.225920  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:12.225929  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:12.225942  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:12.259524  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:12.259552  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:12.333075  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:12.333112  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:12.363221  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:12.363249  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:12.467386  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:12.467420  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:12.498049  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:12.498077  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:12.577701  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:12.577736  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:12.607614  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:12.607650  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:12.637568  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:12.637597  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:12.717020  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:12.717054  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:12.740140  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:12.740170  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:12.806245  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:12.796625   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.797249   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.799733   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.800324   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.802649   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:12.796625   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.797249   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.799733   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.800324   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.802649   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:15.306473  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:15.318959  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:15.319030  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:15.345727  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:15.345823  306747 cri.go:89] found id: ""
	I1017 19:31:15.345847  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:15.345935  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.349860  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:15.349937  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:15.382414  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:15.382437  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:15.382442  306747 cri.go:89] found id: ""
	I1017 19:31:15.382463  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:15.382539  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.386718  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.390470  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:15.390578  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:15.417577  306747 cri.go:89] found id: ""
	I1017 19:31:15.417652  306747 logs.go:282] 0 containers: []
	W1017 19:31:15.417668  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:15.417676  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:15.417743  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:15.445163  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:15.445206  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:15.445211  306747 cri.go:89] found id: ""
	I1017 19:31:15.445220  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:15.445305  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.450196  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.453988  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:15.454058  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:15.479623  306747 cri.go:89] found id: ""
	I1017 19:31:15.479647  306747 logs.go:282] 0 containers: []
	W1017 19:31:15.479655  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:15.479662  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:15.479725  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:15.505913  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:15.505936  306747 cri.go:89] found id: ""
	I1017 19:31:15.505953  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:15.506007  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.509808  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:15.509881  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:15.535383  306747 cri.go:89] found id: ""
	I1017 19:31:15.535408  306747 logs.go:282] 0 containers: []
	W1017 19:31:15.535418  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:15.535428  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:15.535440  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:15.561245  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:15.561272  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:15.622736  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:15.622771  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:15.660115  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:15.660150  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:15.758501  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:15.758536  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:15.778239  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:15.778273  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:15.857887  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:15.842831   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.843942   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.845164   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.846077   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.848805   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:15.842831   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.843942   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.845164   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.846077   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.848805   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:15.857910  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:15.857926  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:15.946523  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:15.946560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:15.980219  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:15.980245  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:16.013998  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:16.014027  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:16.095391  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:16.095426  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:18.629382  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:18.642985  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:18.643054  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:18.669511  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:18.669532  306747 cri.go:89] found id: ""
	I1017 19:31:18.669541  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:18.669601  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.673633  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:18.673707  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:18.702215  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:18.702239  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:18.702244  306747 cri.go:89] found id: ""
	I1017 19:31:18.702252  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:18.702331  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.709379  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.717482  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:18.717554  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:18.744246  306747 cri.go:89] found id: ""
	I1017 19:31:18.744269  306747 logs.go:282] 0 containers: []
	W1017 19:31:18.744277  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:18.744283  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:18.744337  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:18.770169  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:18.770192  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:18.770197  306747 cri.go:89] found id: ""
	I1017 19:31:18.770205  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:18.770271  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.774060  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.777555  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:18.777624  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:18.804459  306747 cri.go:89] found id: ""
	I1017 19:31:18.804485  306747 logs.go:282] 0 containers: []
	W1017 19:31:18.804494  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:18.804500  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:18.804582  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:18.831698  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:18.831721  306747 cri.go:89] found id: ""
	I1017 19:31:18.831730  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:18.831783  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.837132  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:18.837273  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:18.870956  306747 cri.go:89] found id: ""
	I1017 19:31:18.870983  306747 logs.go:282] 0 containers: []
	W1017 19:31:18.870992  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:18.871001  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:18.871012  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:18.986913  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:18.986950  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:19.007461  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:19.007493  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:19.035000  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:19.035029  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:19.116120  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:19.116154  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:19.146274  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:19.146303  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:19.226087  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:19.226126  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:19.274249  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:19.274285  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:19.342797  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:19.333272   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.333919   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.335774   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.336320   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.338756   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:19.333272   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.333919   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.335774   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.336320   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.338756   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:19.342824  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:19.342837  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:19.405167  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:19.405241  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:19.437359  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:19.437389  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:21.966216  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:21.977051  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:21.977124  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:22.010370  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:22.010393  306747 cri.go:89] found id: ""
	I1017 19:31:22.010401  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:22.010463  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.014786  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:22.014905  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:22.054881  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:22.054905  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:22.054910  306747 cri.go:89] found id: ""
	I1017 19:31:22.054917  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:22.054974  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.058919  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.062725  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:22.062801  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:22.092827  306747 cri.go:89] found id: ""
	I1017 19:31:22.092910  306747 logs.go:282] 0 containers: []
	W1017 19:31:22.092926  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:22.092935  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:22.093011  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:22.120574  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:22.120597  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:22.120602  306747 cri.go:89] found id: ""
	I1017 19:31:22.120609  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:22.120665  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.124579  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.128240  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:22.128314  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:22.155355  306747 cri.go:89] found id: ""
	I1017 19:31:22.155382  306747 logs.go:282] 0 containers: []
	W1017 19:31:22.155392  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:22.155398  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:22.155457  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:22.182686  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:22.182750  306747 cri.go:89] found id: ""
	I1017 19:31:22.182771  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:22.182857  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.186655  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:22.186754  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:22.211995  306747 cri.go:89] found id: ""
	I1017 19:31:22.212020  306747 logs.go:282] 0 containers: []
	W1017 19:31:22.212029  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:22.212038  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:22.212080  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:22.310483  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:22.310518  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:22.376696  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:22.367517   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.368315   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.370151   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.370790   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.372572   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:22.367517   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.368315   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.370151   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.370790   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.372572   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:22.376758  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:22.376778  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:22.406493  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:22.406521  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:22.425071  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:22.425110  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:22.454385  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:22.454416  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:22.516625  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:22.516662  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:22.551521  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:22.551555  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:22.645961  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:22.645999  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:22.676665  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:22.676691  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:22.757888  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:22.758011  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:25.307695  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:25.318532  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:25.318666  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:25.351844  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:25.351866  306747 cri.go:89] found id: ""
	I1017 19:31:25.351873  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:25.351936  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.355571  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:25.355637  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:25.382616  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:25.382640  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:25.382646  306747 cri.go:89] found id: ""
	I1017 19:31:25.382664  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:25.382717  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.386649  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.390174  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:25.390311  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:25.417606  306747 cri.go:89] found id: ""
	I1017 19:31:25.417630  306747 logs.go:282] 0 containers: []
	W1017 19:31:25.417639  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:25.417645  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:25.417706  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:25.445452  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:25.445475  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:25.445480  306747 cri.go:89] found id: ""
	I1017 19:31:25.445487  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:25.445541  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.449471  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.452872  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:25.452956  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:25.480615  306747 cri.go:89] found id: ""
	I1017 19:31:25.480648  306747 logs.go:282] 0 containers: []
	W1017 19:31:25.480658  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:25.480664  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:25.480732  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:25.507575  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:25.507595  306747 cri.go:89] found id: ""
	I1017 19:31:25.507603  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:25.507669  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.512130  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:25.512199  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:25.539371  306747 cri.go:89] found id: ""
	I1017 19:31:25.539441  306747 logs.go:282] 0 containers: []
	W1017 19:31:25.539463  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:25.539488  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:25.539527  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:25.619877  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:25.619914  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:25.638042  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:25.638071  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:25.677301  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:25.677335  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:25.768647  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:25.768682  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:25.808421  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:25.808456  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:25.833684  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:25.833709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:25.930177  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:25.930222  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:25.981992  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:25.982022  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:26.087083  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:26.087123  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:26.158486  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:26.150658   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.151278   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.152877   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.153291   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.154745   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:26.150658   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.151278   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.152877   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.153291   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.154745   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:26.158506  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:26.158519  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:28.685675  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:28.697159  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:28.697228  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:28.724197  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:28.724223  306747 cri.go:89] found id: ""
	I1017 19:31:28.724231  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:28.724294  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.728163  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:28.728249  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:28.755375  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:28.755400  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:28.755405  306747 cri.go:89] found id: ""
	I1017 19:31:28.755413  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:28.755465  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.759475  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.762827  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:28.762901  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:28.788123  306747 cri.go:89] found id: ""
	I1017 19:31:28.788150  306747 logs.go:282] 0 containers: []
	W1017 19:31:28.788159  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:28.788165  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:28.788221  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:28.818579  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:28.818611  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:28.818617  306747 cri.go:89] found id: ""
	I1017 19:31:28.818624  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:28.818677  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.822375  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.825827  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:28.825901  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:28.856344  306747 cri.go:89] found id: ""
	I1017 19:31:28.856371  306747 logs.go:282] 0 containers: []
	W1017 19:31:28.856379  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:28.856386  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:28.856456  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:28.883877  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:28.883901  306747 cri.go:89] found id: ""
	I1017 19:31:28.883909  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:28.883969  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.890405  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:28.890482  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:28.919970  306747 cri.go:89] found id: ""
	I1017 19:31:28.919997  306747 logs.go:282] 0 containers: []
	W1017 19:31:28.920007  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:28.920016  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:28.920028  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:28.938590  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:28.938619  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:29.012463  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:29.012502  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:29.051714  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:29.051751  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:29.139864  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:29.139904  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:29.167130  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:29.167157  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:29.244122  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:29.244163  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:29.289243  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:29.289271  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:29.365219  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:29.356772   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.357390   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.358919   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.359407   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.360893   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:29.356772   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.357390   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.358919   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.359407   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.360893   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:29.365246  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:29.365260  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:29.391983  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:29.392013  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:29.418030  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:29.418136  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:32.016682  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:32.027928  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:32.028056  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:32.057743  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:32.057770  306747 cri.go:89] found id: ""
	I1017 19:31:32.057779  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:32.057832  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.062215  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:32.062350  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:32.096282  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:32.096359  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:32.096379  306747 cri.go:89] found id: ""
	I1017 19:31:32.096402  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:32.096490  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.100272  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.104020  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:32.104094  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:32.130658  306747 cri.go:89] found id: ""
	I1017 19:31:32.130684  306747 logs.go:282] 0 containers: []
	W1017 19:31:32.130692  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:32.130698  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:32.130785  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:32.158436  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:32.158459  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:32.158464  306747 cri.go:89] found id: ""
	I1017 19:31:32.158472  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:32.158524  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.162501  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.165977  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:32.166093  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:32.192337  306747 cri.go:89] found id: ""
	I1017 19:31:32.192414  306747 logs.go:282] 0 containers: []
	W1017 19:31:32.192438  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:32.192460  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:32.192566  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:32.224591  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:32.224625  306747 cri.go:89] found id: ""
	I1017 19:31:32.224643  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:32.224699  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.228992  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:32.229114  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:32.263902  306747 cri.go:89] found id: ""
	I1017 19:31:32.263936  306747 logs.go:282] 0 containers: []
	W1017 19:31:32.263945  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:32.263954  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:32.263970  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:32.331346  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:32.321358   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.322175   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.325150   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.325743   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.327508   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:32.321358   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.322175   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.325150   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.325743   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.327508   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:32.331370  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:32.331383  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:32.358344  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:32.358372  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:32.419310  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:32.419347  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:32.462060  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:32.462091  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:32.543672  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:32.543709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:32.572300  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:32.572327  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:32.650752  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:32.650785  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:32.687208  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:32.687239  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:32.785332  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:32.785370  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:32.804237  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:32.804272  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:35.336200  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:35.351300  306747 out.go:203] 
	W1017 19:31:35.354294  306747 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1017 19:31:35.354331  306747 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1017 19:31:35.354341  306747 out.go:285] * Related issues:
	W1017 19:31:35.354355  306747 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1017 19:31:35.354368  306747 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1017 19:31:35.357325  306747 out.go:203] 
	
	
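Note on the failure above: the K8S_APISERVER_MISSING exit is raised after minikube's readiness loop repeatedly runs `sudo pgrep -xnf kube-apiserver.*minikube.*` and the log-gathering fallback without ever observing an apiserver process, while `kubectl describe nodes` keeps failing with "connection refused" on localhost:8443. A minimal sketch of re-running the same probes by hand is shown below; it assumes the profile name from this run (ha-254035) and the binary/kubeconfig paths that appear in the log, and is illustrative rather than part of the test itself.

  # Illustrative only: manual re-run of the probes minikube performs in the log above.
  # Assumes the ha-254035 profile from this test; paths are taken from the log lines.

  # 1. Is there a kube-apiserver process on the control-plane node?
  minikube -p ha-254035 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'

  # 2. Does CRI-O report a kube-apiserver container, and in what state?
  minikube -p ha-254035 ssh -- sudo crictl ps -a --name=kube-apiserver

  # 3. Can the bundled kubectl reach the local apiserver endpoint (port 8443)?
  #    A "connection refused" here matches the describe-nodes failures logged above.
  minikube -p ha-254035 ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
    --kubeconfig=/var/lib/minikube/kubeconfig get nodes
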
	==> CRI-O <==
	Oct 17 19:26:12 ha-254035 crio[663]: time="2025-10-17T19:26:12.336555027Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:26:12 ha-254035 crio[663]: time="2025-10-17T19:26:12.33658308Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:26:12 ha-254035 crio[663]: time="2025-10-17T19:26:12.339801184Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:26:12 ha-254035 crio[663]: time="2025-10-17T19:26:12.339831682Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.953037254Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=202e1d64-912a-476c-ba5a-77b37bc42979 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.953839727Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=6205eb3f-5cb1-4748-8710-0ffe69b4490c name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.955014194Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-254035/kube-controller-manager" id=081f7878-c585-4466-b2db-1bae5c6893ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.955225536Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.961488794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.962588933Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.983518924Z" level=info msg="Created container 09b363cd1ecad740d92d4ebc587ded23344ec9174985137bd42062048a60cec4: kube-system/kube-controller-manager-ha-254035/kube-controller-manager" id=081f7878-c585-4466-b2db-1bae5c6893ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.984251327Z" level=info msg="Starting container: 09b363cd1ecad740d92d4ebc587ded23344ec9174985137bd42062048a60cec4" id=0d55a9d8-f1b5-40f1-8bd6-984aab4be84b name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.987082086Z" level=info msg="Started container" PID=1467 containerID=09b363cd1ecad740d92d4ebc587ded23344ec9174985137bd42062048a60cec4 description=kube-system/kube-controller-manager-ha-254035/kube-controller-manager id=0d55a9d8-f1b5-40f1-8bd6-984aab4be84b name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee9f2d44d32377576c274975d42c83c6d10327b8cf9c78d24d11e2f783796a0e
	Oct 17 19:26:29 ha-254035 conmon[1199]: conmon f662d4e90719bc39bd00 <ninfo>: container 1202 exited with status 1
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.433901954Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f8df12f8-0980-4df8-b1a9-6ee17b7f8ffd name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.435915053Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ba31ec85-e31e-4fc3-9dcf-e12b08bd6e71 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.441058833Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f9d9837c-aba3-4e03-853d-b95f80acea4f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.441479975Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.45712493Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.457473179Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fdd046ea9be9a16a63c03510b49257ec82013029fd6bc07010444052d640f8f0/merged/etc/passwd: no such file or directory"
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.457519947Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fdd046ea9be9a16a63c03510b49257ec82013029fd6bc07010444052d640f8f0/merged/etc/group: no such file or directory"
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.457904732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.498042086Z" level=info msg="Created container faca00e9a381032f2a2a1ca361d6f8261cbb527f61722910f84bf86e69627f22: kube-system/storage-provisioner/storage-provisioner" id=f9d9837c-aba3-4e03-853d-b95f80acea4f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.499778687Z" level=info msg="Starting container: faca00e9a381032f2a2a1ca361d6f8261cbb527f61722910f84bf86e69627f22" id=14304d27-6de8-4811-9a66-8c4d47f3188f name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.503194694Z" level=info msg="Started container" PID=1483 containerID=faca00e9a381032f2a2a1ca361d6f8261cbb527f61722910f84bf86e69627f22 description=kube-system/storage-provisioner/storage-provisioner id=14304d27-6de8-4811-9a66-8c4d47f3188f name=/runtime.v1.RuntimeService/StartContainer sandboxID=c2cae7d5aa8d4e785124a213f6c2cc39a98e7313513ec9ea001c05e6360e2f93
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	faca00e9a3810       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       2                   c2cae7d5aa8d4       storage-provisioner                 kube-system
	09b363cd1ecad       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   5 minutes ago       Running             kube-controller-manager   5                   ee9f2d44d3237       kube-controller-manager-ha-254035   kube-system
	576cfa798259d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 minutes ago       Running             kindnet-cni               1                   70bac1a7c5264       kindnet-gzzsg                       kube-system
	9ee89513ed12a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   9b9434e716ce6       coredns-66bc5c9577-wbgc8            kube-system
	758a5862ad867       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   5 minutes ago       Running             busybox                   1                   be0fe8edcd6ba       busybox-7b57f96db7-nc6x2            default
	c52f3d12f85be       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 minutes ago       Running             kube-proxy                1                   e47d5acf8c94c       kube-proxy-548b2                    kube-system
	f662d4e90719b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Exited              storage-provisioner       1                   c2cae7d5aa8d4       storage-provisioner                 kube-system
	8edb27c8d6015       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   269b656ae24bb       coredns-66bc5c9577-gfklr            kube-system
	8f2e18695e457       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Exited              kube-controller-manager   4                   ee9f2d44d3237       kube-controller-manager-ha-254035   kube-system
	26c8280f98ef8       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Running             kube-apiserver            2                   5952fd9040500       kube-apiserver-ha-254035            kube-system
	a9f69dd8228df       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   7 minutes ago       Running             kube-scheduler            1                   9e4e211817dbb       kube-scheduler-ha-254035            kube-system
	2dc181e1d75c1       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   7 minutes ago       Running             kube-vip                  0                   75776cf83b5c8       kube-vip-ha-254035                  kube-system
	99ffff8c4838d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   7 minutes ago       Running             etcd                      1                   d1536a316aa1d       etcd-ha-254035                      kube-system
	b745cb636fe8e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   7 minutes ago       Exited              kube-apiserver            1                   5952fd9040500       kube-apiserver-ha-254035            kube-system
	
	
	==> coredns [8edb27c8d6015a43dc1b4fd9d8f695495a303a3c83de005f1197b1c1420e5d7e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58119 - 23158 "HINFO IN 703179826096282682.4600017575089700098. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.025326139s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
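Note on the CoreDNS output above: the repeated "waiting for Kubernetes API" and i/o-timeout lines indicate the kubernetes plugin cannot reach the apiserver through the service VIP 10.96.0.1:443, which is consistent with the apiserver-missing failure earlier rather than a CoreDNS problem. A hedged way to confirm this from the node is sketched below; the `ss` and `curl` invocations assume those tools are present in the node image and are not taken from the test log.

  # Illustrative checks, assuming the ha-254035 profile from this run.
  # Is anything listening on the apiserver's local port?
  minikube -p ha-254035 ssh -- sudo ss -ltnp 'sport = :8443'

  # Can the service VIP that CoreDNS uses be reached at all?
  minikube -p ha-254035 ssh -- curl -sk --max-time 3 https://10.96.0.1:443/version
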
	
	==> coredns [9ee89513ed12a83eea9b477aadcc58ed9f5e2d62a017cd43bad27b1118f04b45] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59051 - 49005 "HINFO IN 2456025369292059622.4845573965486641381. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018045022s
	
	
	==> describe nodes <==
	Name:               ha-254035
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_17_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:17:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:31:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:31:37 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:31:37 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:31:37 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:31:37 +0000   Fri, 17 Oct 2025 19:18:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-254035
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                eadb5c5f-dcbb-485c-aea7-3aa5b951fd9e
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-nc6x2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-gfklr             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-wbgc8             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-254035                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-gzzsg                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-254035             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-254035    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-548b2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-254035             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-254035                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m39s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-254035 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-254035 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-254035 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-254035 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           8m26s                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   NodeHasSufficientMemory  7m46s (x8 over 7m47s)  kubelet          Node ha-254035 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m46s (x8 over 7m47s)  kubelet          Node ha-254035 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m46s (x8 over 7m47s)  kubelet          Node ha-254035 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m7s                   node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	
	
	Name:               ha-254035-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_18_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:18:42 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:23:19 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 17 Oct 2025 19:23:09 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 17 Oct 2025 19:23:09 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 17 Oct 2025 19:23:09 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 17 Oct 2025 19:23:09 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-254035-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                6c5e97e0-fa27-407d-a976-b646e8a40ca5
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6xjlp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-254035-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-vss98                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-254035-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-254035-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-b4fr6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-254035-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-254035-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 8m23s                kube-proxy       
	  Normal   Starting                 12m                  kube-proxy       
	  Normal   RegisteredNode           12m                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           12m                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           11m                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   Starting                 9m4s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m4s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m3s (x8 over 9m4s)  kubelet          Node ha-254035-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     9m3s (x8 over 9m4s)  kubelet          Node ha-254035-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    9m3s (x8 over 9m4s)  kubelet          Node ha-254035-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeNotReady             8m31s                node-controller  Node ha-254035-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           8m26s                node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           5m7s                 node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   NodeNotReady             4m17s                node-controller  Node ha-254035-m02 status is now: NodeNotReady
	
	
	Name:               ha-254035-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_20_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:19:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:23:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 17 Oct 2025 19:21:41 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 17 Oct 2025 19:21:41 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 17 Oct 2025 19:21:41 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 17 Oct 2025 19:21:41 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-254035-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                2f343c58-0cc9-444a-bc88-7799c3ff52df
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-979zm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-254035-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-2k9kj                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-254035-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-254035-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-k56cv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-254035-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-254035-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        11m    kube-proxy       
	  Normal  RegisteredNode  11m    node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal  RegisteredNode  8m26s  node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal  RegisteredNode  5m7s   node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal  NodeNotReady    4m17s  node-controller  Node ha-254035-m03 status is now: NodeNotReady
	
	
	Name:               ha-254035-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_21_16_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:21:15 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:22:57 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 17 Oct 2025 19:21:57 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 17 Oct 2025 19:21:57 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 17 Oct 2025 19:21:57 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 17 Oct 2025 19:21:57 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-254035-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                12691412-a8b5-426e-846e-d6161e527ea6
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pwhwv       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-fr5ts    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-254035-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-254035-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-254035-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   NodeReady                9m41s              kubelet          Node ha-254035-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m26s              node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           5m7s               node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   NodeNotReady             4m17s              node-controller  Node ha-254035-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Oct17 18:30] overlayfs: idmapped layers are currently not supported
	[Oct17 18:31] overlayfs: idmapped layers are currently not supported
	[  +9.357480] overlayfs: idmapped layers are currently not supported
	[Oct17 18:33] overlayfs: idmapped layers are currently not supported
	[  +5.779853] overlayfs: idmapped layers are currently not supported
	[Oct17 18:34] overlayfs: idmapped layers are currently not supported
	[Oct17 18:35] overlayfs: idmapped layers are currently not supported
	[Oct17 18:36] overlayfs: idmapped layers are currently not supported
	[ +20.850590] overlayfs: idmapped layers are currently not supported
	[Oct17 18:38] overlayfs: idmapped layers are currently not supported
	[ +19.812679] overlayfs: idmapped layers are currently not supported
	[Oct17 18:39] overlayfs: idmapped layers are currently not supported
	[ +19.225178] overlayfs: idmapped layers are currently not supported
	[Oct17 18:40] overlayfs: idmapped layers are currently not supported
	[Oct17 18:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 18:57] overlayfs: idmapped layers are currently not supported
	[Oct17 19:03] overlayfs: idmapped layers are currently not supported
	[Oct17 19:04] overlayfs: idmapped layers are currently not supported
	[Oct17 19:17] overlayfs: idmapped layers are currently not supported
	[Oct17 19:18] overlayfs: idmapped layers are currently not supported
	[Oct17 19:19] overlayfs: idmapped layers are currently not supported
	[Oct17 19:21] overlayfs: idmapped layers are currently not supported
	[Oct17 19:22] overlayfs: idmapped layers are currently not supported
	[Oct17 19:23] overlayfs: idmapped layers are currently not supported
	[  +4.119232] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [99ffff8c4838d302fd86aa2def104fc0bc5a061a4b4b00a66b6659be26e84f94] <==
	{"level":"warn","ts":"2025-10-17T19:31:38.583242Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.645690Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.681796Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.682509Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.732956Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.740102Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.752111Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.762571Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.777462Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.782627Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.785546Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.788284Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.793579Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.797042Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.804296Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.823216Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.829310Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.832466Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.836376Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.843161Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.854493Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.857031Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.860095Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.860428Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:38.882539Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:31:38 up  2:14,  0 user,  load average: 1.05, 1.21, 1.24
	Linux ha-254035 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [576cfa798259d8160ac05728f7d414a328778671800ac5aa4b4d45bfd6b32ca7] <==
	I1017 19:31:02.311185       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:31:12.316594       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:31:12.316633       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:31:12.316778       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:31:12.316785       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:31:12.316830       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 19:31:12.316835       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	I1017 19:31:12.316878       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:31:12.316884       1 main.go:301] handling current node
	I1017 19:31:22.316591       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:31:22.316724       1 main.go:301] handling current node
	I1017 19:31:22.316765       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:31:22.316799       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:31:22.316958       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:31:22.316999       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:31:22.317085       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 19:31:22.317118       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	I1017 19:31:32.318786       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:31:32.318883       1 main.go:301] handling current node
	I1017 19:31:32.318923       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:31:32.318956       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:31:32.319124       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:31:32.319162       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:31:32.319267       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 19:31:32.319300       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [26c8280f98ef8d0b35d3d3f933f908e0be045364d9887ae7338e14fc4e4385e4] <==
	I1017 19:25:41.080327       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 19:25:41.096711       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 19:25:41.096824       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 19:25:41.097844       1 cache.go:39] Caches are synced for autoregister controller
	I1017 19:25:41.175963       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 19:25:41.240687       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:25:41.270984       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	W1017 19:25:41.278063       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1017 19:25:41.280292       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 19:25:41.288893       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 19:25:41.289028       1 policy_source.go:240] refreshing policies
	I1017 19:25:41.289185       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 19:25:41.331450       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:25:41.383818       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:25:41.406733       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1017 19:25:41.413308       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1017 19:25:45.477912       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1017 19:25:45.579324       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 19:25:45.579417       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	W1017 19:25:46.424106       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1017 19:25:47.046652       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1017 19:26:06.426319       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1017 19:27:22.125956       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 19:27:22.236976       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:27:22.377213       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [b745cb636fe8e12797dbad3808d1af04aa579d4fbd2ba8ac91052e88e1d9594d] <==
	{"level":"warn","ts":"2025-10-17T19:24:55.662540Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f51a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.662541Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001002000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.662657Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f51a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.662764Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016fad20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.662902Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016fad20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.663035Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400253bc20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.663152Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400253bc20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.663213Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001002000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.663271Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40011003c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.663383Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001002000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.664911Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016fba40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665014Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016fba40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665142Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016fba40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665183Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026141e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665234Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026141e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665283Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002615680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665351Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002b00960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665456Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40027650e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.662006Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014c32c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:25:01.465860Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001002d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	E1017 19:25:01.465976       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E1017 19:25:01.466227       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="GET" URI="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-254035?timeout=10s" auditID="46bb9fa1-62e8-45b2-afdf-459f2b875119"
	E1017 19:25:01.466249       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.626µs" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-254035" result=null
	F1017 19:25:02.365194       1 hooks.go:204] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	{"level":"warn","ts":"2025-10-17T19:25:02.527979Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f51860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	
	
	==> kube-controller-manager [09b363cd1ecad740d92d4ebc587ded23344ec9174985137bd42062048a60cec4] <==
	I1017 19:26:31.955042       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:26:31.955150       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 19:26:31.955182       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 19:26:31.960320       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 19:26:31.964011       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 19:26:31.973631       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 19:26:31.974067       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 19:26:31.974279       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 19:26:31.974994       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 19:26:31.975207       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:26:31.975822       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 19:26:31.976008       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 19:26:31.976066       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 19:26:31.976280       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 19:26:31.977778       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 19:26:31.982328       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 19:26:31.982451       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-254035-m04"
	I1017 19:26:31.985705       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:26:31.985877       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 19:26:31.996213       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 19:26:31.999311       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:26:32.005595       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 19:26:32.011326       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 19:26:32.011373       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 19:27:22.463777       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	
	
	==> kube-controller-manager [8f2e18695e457839c6b48b8cf9525b8e3133c1a6d2c7b0e484fc6130ec820a7a] <==
	I1017 19:25:26.963428       1 serving.go:386] Generated self-signed cert in-memory
	I1017 19:25:27.847264       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1017 19:25:27.847300       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:25:27.848875       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1017 19:25:27.849078       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1017 19:25:27.849285       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1017 19:25:27.849330       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 19:25:37.867683       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [c52f3d12f85be9ad9f0f95f3255def1ee473db156fc0776fb80fa92aad03d8c3] <==
	I1017 19:25:59.103590       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:25:59.177968       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:25:59.279067       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:25:59.279103       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 19:25:59.279223       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:25:59.297489       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:25:59.297617       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:25:59.301231       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:25:59.301529       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:25:59.301552       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:25:59.305385       1 config.go:200] "Starting service config controller"
	I1017 19:25:59.305486       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:25:59.305654       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:25:59.305943       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:25:59.306000       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:25:59.306196       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:25:59.307366       1 config.go:309] "Starting node config controller"
	I1017 19:25:59.311349       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:25:59.311421       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:25:59.405715       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:25:59.406183       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:25:59.406288       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a9f69dd8228df806b3caf0a6a77814b3035f6624474afd789ff17d36b93becbb] <==
	E1017 19:24:43.700780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:24:44.750268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:24:46.554973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:24:47.376765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 19:24:47.902102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:25:06.878063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:25:07.212761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:25:12.280794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:25:12.456185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:25:13.739609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:25:14.975535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:25:16.328928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:25:18.380682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:25:20.375603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:25:21.123675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:25:21.517709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:25:21.932068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:25:22.080795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:25:22.270841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:25:25.020718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:25:25.490826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:25:28.981572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:25:29.683639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 19:25:35.763654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1017 19:26:13.713049       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.312257     795 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-gfklr_kube-system(8bf2b43b-91c9-4531-a571-36060412860e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.312386     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-gfklr" podUID="8bf2b43b-91c9-4531-a571-36060412860e"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.317109     795 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-gzzsg_kube-system(9d09bb8e-ddb5-4533-9215-83fefb05a7eb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.317252     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-gzzsg" podUID="9d09bb8e-ddb5-4533-9215-83fefb05a7eb"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.319138     795 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-wbgc8_kube-system(8e82e918-326c-4295-82ea-e35a31f64287): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.319272     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-wbgc8" podUID="8e82e918-326c-4295-82ea-e35a31f64287"
	Oct 17 19:25:47 ha-254035 kubelet[795]: I1017 19:25:47.321488     795 scope.go:117] "RemoveContainer" containerID="8f2e18695e457839c6b48b8cf9525b8e3133c1a6d2c7b0e484fc6130ec820a7a"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.321734     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-254035_kube-system(9046e63156250f7e5e453bf172e4f118)\"" pod="kube-system/kube-controller-manager-ha-254035" podUID="9046e63156250f7e5e453bf172e4f118"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.322802     795 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-proxy start failed in pod kube-proxy-548b2_kube-system(4b772887-90df-4871-9343-69349bdda859): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.322858     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-548b2" podUID="4b772887-90df-4871-9343-69349bdda859"
	Oct 17 19:25:47 ha-254035 kubelet[795]: I1017 19:25:47.952228     795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f120554cc7e7eb74e29c79f31815613" path="/var/lib/kubelet/pods/4f120554cc7e7eb74e29c79f31815613/volumes"
	Oct 17 19:25:48 ha-254035 kubelet[795]: I1017 19:25:48.323043     795 scope.go:117] "RemoveContainer" containerID="8f2e18695e457839c6b48b8cf9525b8e3133c1a6d2c7b0e484fc6130ec820a7a"
	Oct 17 19:25:48 ha-254035 kubelet[795]: E1017 19:25:48.323207     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-254035_kube-system(9046e63156250f7e5e453bf172e4f118)\"" pod="kube-system/kube-controller-manager-ha-254035" podUID="9046e63156250f7e5e453bf172e4f118"
	Oct 17 19:25:51 ha-254035 kubelet[795]: E1017 19:25:51.831559     795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03470d76597f9b6c687fb760070a93426d27f3c0f7970222ccd19d14d2affb5f\": container with ID starting with 03470d76597f9b6c687fb760070a93426d27f3c0f7970222ccd19d14d2affb5f not found: ID does not exist" containerID="03470d76597f9b6c687fb760070a93426d27f3c0f7970222ccd19d14d2affb5f"
	Oct 17 19:25:51 ha-254035 kubelet[795]: I1017 19:25:51.831609     795 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="03470d76597f9b6c687fb760070a93426d27f3c0f7970222ccd19d14d2affb5f" err="rpc error: code = NotFound desc = could not find container \"03470d76597f9b6c687fb760070a93426d27f3c0f7970222ccd19d14d2affb5f\": container with ID starting with 03470d76597f9b6c687fb760070a93426d27f3c0f7970222ccd19d14d2affb5f not found: ID does not exist"
	Oct 17 19:25:51 ha-254035 kubelet[795]: E1017 19:25:51.832065     795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37f378576ff44f5cd1ccff55de48495bda098525ad6fb1d91c1ef854b4fdd99f\": container with ID starting with 37f378576ff44f5cd1ccff55de48495bda098525ad6fb1d91c1ef854b4fdd99f not found: ID does not exist" containerID="37f378576ff44f5cd1ccff55de48495bda098525ad6fb1d91c1ef854b4fdd99f"
	Oct 17 19:25:51 ha-254035 kubelet[795]: I1017 19:25:51.832099     795 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="37f378576ff44f5cd1ccff55de48495bda098525ad6fb1d91c1ef854b4fdd99f" err="rpc error: code = NotFound desc = could not find container \"37f378576ff44f5cd1ccff55de48495bda098525ad6fb1d91c1ef854b4fdd99f\": container with ID starting with 37f378576ff44f5cd1ccff55de48495bda098525ad6fb1d91c1ef854b4fdd99f not found: ID does not exist"
	Oct 17 19:25:51 ha-254035 kubelet[795]: E1017 19:25:51.918773     795 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a4e6e217ea695149c5a154bbecbc7798ca28f6ae40caa311c266f47def107466/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a4e6e217ea695149c5a154bbecbc7798ca28f6ae40caa311c266f47def107466/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-254035_9046e63156250f7e5e453bf172e4f118/kube-controller-manager/3.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-254035_9046e63156250f7e5e453bf172e4f118/kube-controller-manager/3.log: no such file or directory
	Oct 17 19:25:51 ha-254035 kubelet[795]: E1017 19:25:51.921773     795 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/880b7d2432f854b1d2e4221c38cbcfa637187b519d26b99deb22f9bb126c2b9f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/880b7d2432f854b1d2e4221c38cbcfa637187b519d26b99deb22f9bb126c2b9f/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-254035_9046e63156250f7e5e453bf172e4f118/kube-controller-manager/2.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-254035_9046e63156250f7e5e453bf172e4f118/kube-controller-manager/2.log: no such file or directory
	Oct 17 19:25:59 ha-254035 kubelet[795]: I1017 19:25:59.951449     795 scope.go:117] "RemoveContainer" containerID="8f2e18695e457839c6b48b8cf9525b8e3133c1a6d2c7b0e484fc6130ec820a7a"
	Oct 17 19:25:59 ha-254035 kubelet[795]: E1017 19:25:59.951658     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-254035_kube-system(9046e63156250f7e5e453bf172e4f118)\"" pod="kube-system/kube-controller-manager-ha-254035" podUID="9046e63156250f7e5e453bf172e4f118"
	Oct 17 19:26:14 ha-254035 kubelet[795]: I1017 19:26:14.950613     795 scope.go:117] "RemoveContainer" containerID="8f2e18695e457839c6b48b8cf9525b8e3133c1a6d2c7b0e484fc6130ec820a7a"
	Oct 17 19:26:14 ha-254035 kubelet[795]: E1017 19:26:14.950806     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-254035_kube-system(9046e63156250f7e5e453bf172e4f118)\"" pod="kube-system/kube-controller-manager-ha-254035" podUID="9046e63156250f7e5e453bf172e4f118"
	Oct 17 19:26:27 ha-254035 kubelet[795]: I1017 19:26:27.952669     795 scope.go:117] "RemoveContainer" containerID="8f2e18695e457839c6b48b8cf9525b8e3133c1a6d2c7b0e484fc6130ec820a7a"
	Oct 17 19:26:29 ha-254035 kubelet[795]: I1017 19:26:29.433310     795 scope.go:117] "RemoveContainer" containerID="f662d4e90719bc39bd008b62c1cbb5dd8676a08edeef61897f3e68749b418ff7"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-254035 -n ha-254035
helpers_test.go:269: (dbg) Run:  kubectl --context ha-254035 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (514.88s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (5.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-254035 node delete m03 --alsologtostderr -v 5: exit status 83 (181.645041ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-254035-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-254035"

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:31:41.138410  323199 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:31:41.139923  323199 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:31:41.139964  323199 out.go:374] Setting ErrFile to fd 2...
	I1017 19:31:41.139990  323199 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:31:41.140326  323199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:31:41.140767  323199 mustload.go:65] Loading cluster: ha-254035
	I1017 19:31:41.141264  323199 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:31:41.141787  323199 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:31:41.161251  323199 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:31:41.161566  323199 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:31:41.219660  323199 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-17 19:31:41.209099184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:31:41.220060  323199 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:31:41.237642  323199 host.go:66] Checking if "ha-254035-m02" exists ...
	I1017 19:31:41.238154  323199 cli_runner.go:164] Run: docker container inspect ha-254035-m03 --format={{.State.Status}}
	I1017 19:31:41.260659  323199 out.go:179] * The control-plane node ha-254035-m03 host is not running: state=Stopped
	I1017 19:31:41.263555  323199 out.go:179]   To start a cluster, run: "minikube start -p ha-254035"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-linux-arm64 -p ha-254035 node delete m03 --alsologtostderr -v 5": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5: exit status 7 (564.88294ms)

                                                
                                                
-- stdout --
	ha-254035
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-254035-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-254035-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-254035-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:31:41.329000  323249 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:31:41.329182  323249 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:31:41.329217  323249 out.go:374] Setting ErrFile to fd 2...
	I1017 19:31:41.329236  323249 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:31:41.329531  323249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:31:41.329872  323249 out.go:368] Setting JSON to false
	I1017 19:31:41.329936  323249 mustload.go:65] Loading cluster: ha-254035
	I1017 19:31:41.330012  323249 notify.go:220] Checking for updates...
	I1017 19:31:41.331029  323249 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:31:41.331085  323249 status.go:174] checking status of ha-254035 ...
	I1017 19:31:41.331626  323249 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:31:41.355545  323249 status.go:371] ha-254035 host status = "Running" (err=<nil>)
	I1017 19:31:41.355567  323249 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:31:41.355852  323249 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:31:41.381228  323249 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:31:41.381527  323249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:31:41.381574  323249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:31:41.398704  323249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:31:41.502498  323249 ssh_runner.go:195] Run: systemctl --version
	I1017 19:31:41.509661  323249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:31:41.522955  323249 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:31:41.576837  323249 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-17 19:31:41.566421384 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:31:41.577375  323249 kubeconfig.go:125] found "ha-254035" server: "https://192.168.49.254:8443"
	I1017 19:31:41.577413  323249 api_server.go:166] Checking apiserver status ...
	I1017 19:31:41.577463  323249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:41.589356  323249 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1066/cgroup
	I1017 19:31:41.598086  323249 api_server.go:182] apiserver freezer: "2:freezer:/docker/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/crio/crio-26c8280f98ef8d0b35d3d3f933f908e0be045364d9887ae7338e14fc4e4385e4"
	I1017 19:31:41.598158  323249 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/crio/crio-26c8280f98ef8d0b35d3d3f933f908e0be045364d9887ae7338e14fc4e4385e4/freezer.state
	I1017 19:31:41.605485  323249 api_server.go:204] freezer state: "THAWED"
	I1017 19:31:41.605515  323249 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1017 19:31:41.613733  323249 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1017 19:31:41.613761  323249 status.go:463] ha-254035 apiserver status = Running (err=<nil>)
	I1017 19:31:41.613773  323249 status.go:176] ha-254035 status: &{Name:ha-254035 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:31:41.613790  323249 status.go:174] checking status of ha-254035-m02 ...
	I1017 19:31:41.614107  323249 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:31:41.631104  323249 status.go:371] ha-254035-m02 host status = "Running" (err=<nil>)
	I1017 19:31:41.631132  323249 host.go:66] Checking if "ha-254035-m02" exists ...
	I1017 19:31:41.631435  323249 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:31:41.649055  323249 host.go:66] Checking if "ha-254035-m02" exists ...
	I1017 19:31:41.649354  323249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:31:41.649407  323249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:31:41.667481  323249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:31:41.769798  323249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:31:41.782623  323249 kubeconfig.go:125] found "ha-254035" server: "https://192.168.49.254:8443"
	I1017 19:31:41.782651  323249 api_server.go:166] Checking apiserver status ...
	I1017 19:31:41.782695  323249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1017 19:31:41.792933  323249 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:31:41.793008  323249 status.go:463] ha-254035-m02 apiserver status = Running (err=<nil>)
	I1017 19:31:41.793052  323249 status.go:176] ha-254035-m02 status: &{Name:ha-254035-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:31:41.793084  323249 status.go:174] checking status of ha-254035-m03 ...
	I1017 19:31:41.793408  323249 cli_runner.go:164] Run: docker container inspect ha-254035-m03 --format={{.State.Status}}
	I1017 19:31:41.810828  323249 status.go:371] ha-254035-m03 host status = "Stopped" (err=<nil>)
	I1017 19:31:41.810865  323249 status.go:384] host is not running, skipping remaining checks
	I1017 19:31:41.810872  323249 status.go:176] ha-254035-m03 status: &{Name:ha-254035-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:31:41.810893  323249 status.go:174] checking status of ha-254035-m04 ...
	I1017 19:31:41.811176  323249 cli_runner.go:164] Run: docker container inspect ha-254035-m04 --format={{.State.Status}}
	I1017 19:31:41.827257  323249 status.go:371] ha-254035-m04 host status = "Stopped" (err=<nil>)
	I1017 19:31:41.827281  323249 status.go:384] host is not running, skipping remaining checks
	I1017 19:31:41.827287  323249 status.go:176] ha-254035-m04 status: &{Name:ha-254035-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-254035
helpers_test.go:243: (dbg) docker inspect ha-254035:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8",
	        "Created": "2025-10-17T19:17:36.603472481Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 306876,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:23:44.340324163Z",
	            "FinishedAt": "2025-10-17T19:23:43.760876929Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/hosts",
	        "LogPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8-json.log",
	        "Name": "/ha-254035",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-254035:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-254035",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8",
	                "LowerDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-254035",
	                "Source": "/var/lib/docker/volumes/ha-254035/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-254035",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-254035",
	                "name.minikube.sigs.k8s.io": "ha-254035",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d0adb3a8a6f2813284c8f1a167175cc89dcd4664a3ffc878d2459fa2b4bea6d1",
	            "SandboxKey": "/var/run/docker/netns/d0adb3a8a6f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-254035": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:f1:6c:59:90:54",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9f667d9c3ea201faa6573d33bffc4907012785051d424eb86a31b1e09eb8b135",
	                    "EndpointID": "daecfb65c2dbfda1e321a7412bf642ac1f3e72c152f9f670fa4c977e6a8f5b74",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-254035",
	                        "7f770318d5dc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
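For reference, the post-mortem helpers read the host-mapped SSH port (33174) from the NetworkSettings.Ports section of the inspect output above via docker's --format templating. A hypothetical equivalent one-liner, assuming jq is available on the host (jq is not used by the test harness itself):

	# hypothetical: extract the 22/tcp host port from docker inspect with jq
	docker inspect ha-254035 | jq -r '.[0].NetworkSettings.Ports["22/tcp"][0].HostPort'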
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-254035 -n ha-254035
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 logs -n 25: (2.16344073s)
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-254035 ssh -n ha-254035-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m02 sudo cat /home/docker/cp-test_ha-254035-m03_ha-254035-m02.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m03:/home/docker/cp-test.txt ha-254035-m04:/home/docker/cp-test_ha-254035-m03_ha-254035-m04.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test_ha-254035-m03_ha-254035-m04.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp testdata/cp-test.txt ha-254035-m04:/home/docker/cp-test.txt                                                             │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1188979754/001/cp-test_ha-254035-m04.txt │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035:/home/docker/cp-test_ha-254035-m04_ha-254035.txt                       │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035.txt                                                 │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035-m02:/home/docker/cp-test_ha-254035-m04_ha-254035-m02.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m02 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035-m02.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035-m03:/home/docker/cp-test_ha-254035-m04_ha-254035-m03.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m03 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035-m03.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ node    │ ha-254035 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ node    │ ha-254035 node start m02 --alsologtostderr -v 5                                                                                      │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:23 UTC │
	│ node    │ ha-254035 node list --alsologtostderr -v 5                                                                                           │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │                     │
	│ stop    │ ha-254035 stop --alsologtostderr -v 5                                                                                                │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │ 17 Oct 25 19:23 UTC │
	│ start   │ ha-254035 start --wait true --alsologtostderr -v 5                                                                                   │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │                     │
	│ node    │ ha-254035 node list --alsologtostderr -v 5                                                                                           │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:31 UTC │                     │
	│ node    │ ha-254035 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:31 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:23:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:23:44.078300  306747 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:23:44.078421  306747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:23:44.078432  306747 out.go:374] Setting ErrFile to fd 2...
	I1017 19:23:44.078438  306747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:23:44.078707  306747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:23:44.079081  306747 out.go:368] Setting JSON to false
	I1017 19:23:44.079937  306747 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7575,"bootTime":1760721449,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 19:23:44.080008  306747 start.go:141] virtualization:  
	I1017 19:23:44.083220  306747 out.go:179] * [ha-254035] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 19:23:44.087049  306747 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:23:44.087156  306747 notify.go:220] Checking for updates...
	I1017 19:23:44.093223  306747 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:23:44.096040  306747 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:23:44.098900  306747 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 19:23:44.101720  306747 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 19:23:44.104684  306747 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:23:44.108337  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:44.108506  306747 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:23:44.135326  306747 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 19:23:44.135444  306747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:23:44.192131  306747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 19:23:44.183230595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:23:44.192236  306747 docker.go:318] overlay module found
	I1017 19:23:44.195310  306747 out.go:179] * Using the docker driver based on existing profile
	I1017 19:23:44.198085  306747 start.go:305] selected driver: docker
	I1017 19:23:44.198103  306747 start.go:925] validating driver "docker" against &{Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:23:44.198244  306747 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:23:44.198355  306747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:23:44.253333  306747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 19:23:44.243935529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:23:44.253792  306747 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:23:44.253819  306747 cni.go:84] Creating CNI manager for ""
	I1017 19:23:44.253877  306747 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 19:23:44.253928  306747 start.go:349] cluster config:
	{Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:23:44.258934  306747 out.go:179] * Starting "ha-254035" primary control-plane node in "ha-254035" cluster
	I1017 19:23:44.261731  306747 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:23:44.264643  306747 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:23:44.267316  306747 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:23:44.267375  306747 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 19:23:44.267392  306747 cache.go:58] Caching tarball of preloaded images
	I1017 19:23:44.267402  306747 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:23:44.267494  306747 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:23:44.267505  306747 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:23:44.267648  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:44.287307  306747 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:23:44.287328  306747 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:23:44.287345  306747 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:23:44.287367  306747 start.go:360] acquireMachinesLock for ha-254035: {Name:mka2e39989b9cf6078778e7f6519885462ea711f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:23:44.287430  306747 start.go:364] duration metric: took 44.061µs to acquireMachinesLock for "ha-254035"
	I1017 19:23:44.287455  306747 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:23:44.287461  306747 fix.go:54] fixHost starting: 
	I1017 19:23:44.287734  306747 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:23:44.304208  306747 fix.go:112] recreateIfNeeded on ha-254035: state=Stopped err=<nil>
	W1017 19:23:44.304236  306747 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:23:44.307544  306747 out.go:252] * Restarting existing docker container for "ha-254035" ...
	I1017 19:23:44.307642  306747 cli_runner.go:164] Run: docker start ha-254035
	I1017 19:23:44.557261  306747 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:23:44.582382  306747 kic.go:430] container "ha-254035" state is running.
	I1017 19:23:44.582813  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:23:44.609625  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:44.609882  306747 machine.go:93] provisionDockerMachine start ...
	I1017 19:23:44.609944  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:44.630467  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:44.634045  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 19:23:44.634070  306747 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:23:44.634815  306747 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:23:47.792030  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035
	
	I1017 19:23:47.792065  306747 ubuntu.go:182] provisioning hostname "ha-254035"
	I1017 19:23:47.792127  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:47.809622  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:47.809936  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 19:23:47.809952  306747 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035 && echo "ha-254035" | sudo tee /etc/hostname
	I1017 19:23:47.965159  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035
	
	I1017 19:23:47.965243  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:47.983936  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:47.984247  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 19:23:47.984262  306747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:23:48.140890  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:23:48.140965  306747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:23:48.140998  306747 ubuntu.go:190] setting up certificates
	I1017 19:23:48.141008  306747 provision.go:84] configureAuth start
	I1017 19:23:48.141069  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:23:48.158600  306747 provision.go:143] copyHostCerts
	I1017 19:23:48.158645  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:23:48.158680  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:23:48.158692  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:23:48.158773  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:23:48.158860  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:23:48.158883  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:23:48.158892  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:23:48.158921  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:23:48.158969  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:23:48.158990  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:23:48.158998  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:23:48.159024  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:23:48.159076  306747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035 san=[127.0.0.1 192.168.49.2 ha-254035 localhost minikube]
	I1017 19:23:49.196726  306747 provision.go:177] copyRemoteCerts
	I1017 19:23:49.196790  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:23:49.196831  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:49.213909  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:49.316345  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:23:49.316405  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:23:49.333689  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:23:49.333750  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1017 19:23:49.350869  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:23:49.350938  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 19:23:49.369234  306747 provision.go:87] duration metric: took 1.228212253s to configureAuth
	I1017 19:23:49.369303  306747 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:23:49.369552  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:49.369665  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:49.386704  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:49.387020  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 19:23:49.387042  306747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:23:49.707607  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:23:49.707692  306747 machine.go:96] duration metric: took 5.097783711s to provisionDockerMachine
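After the CRIO_MINIKUBE_OPTIONS write and crio restart above, a quick spot-check on the node (same paths as in the log; a hedged sketch, not part of the recorded run) would be:

	$ cat /etc/sysconfig/crio.minikube
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	$ systemctl is-active crio
	active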
	I1017 19:23:49.707720  306747 start.go:293] postStartSetup for "ha-254035" (driver="docker")
	I1017 19:23:49.707762  306747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:23:49.707871  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:23:49.707943  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:49.732798  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:49.836574  306747 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:23:49.839984  306747 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:23:49.840010  306747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:23:49.840021  306747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:23:49.840085  306747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:23:49.840181  306747 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:23:49.840196  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:23:49.840298  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:23:49.847846  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:23:49.865445  306747 start.go:296] duration metric: took 157.679358ms for postStartSetup
	I1017 19:23:49.865569  306747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:23:49.865624  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:49.889188  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:49.989662  306747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:23:49.994825  306747 fix.go:56] duration metric: took 5.707355296s for fixHost
	I1017 19:23:49.994852  306747 start.go:83] releasing machines lock for "ha-254035", held for 5.707408965s
	I1017 19:23:49.994927  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:23:50.015297  306747 ssh_runner.go:195] Run: cat /version.json
	I1017 19:23:50.015360  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:50.015301  306747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:23:50.015521  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:50.036378  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:50.050179  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:50.238257  306747 ssh_runner.go:195] Run: systemctl --version
	I1017 19:23:50.244735  306747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:23:50.281650  306747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:23:50.286151  306747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:23:50.286279  306747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:23:50.294085  306747 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:23:50.294116  306747 start.go:495] detecting cgroup driver to use...
	I1017 19:23:50.294156  306747 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:23:50.294238  306747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:23:50.309600  306747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:23:50.322860  306747 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:23:50.322932  306747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:23:50.338234  306747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:23:50.351355  306747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:23:50.467572  306747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:23:50.583217  306747 docker.go:234] disabling docker service ...
	I1017 19:23:50.583338  306747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:23:50.598924  306747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:23:50.611975  306747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:23:50.724286  306747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:23:50.847044  306747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:23:50.859364  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:23:50.873503  306747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:23:50.873573  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.882985  306747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:23:50.883056  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.892747  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.902591  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.911060  306747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:23:50.919007  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.928031  306747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.936934  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.945620  306747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:23:50.953208  306747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:23:50.960459  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:23:51.085184  306747 ssh_runner.go:195] Run: sudo systemctl restart crio
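Taken together, the sed commands above should leave the CRI-O drop-in with roughly these keys (a sketch reconstructed from the commands in this log, not a dump of the real file):

	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the edits)
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]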
	I1017 19:23:51.215570  306747 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:23:51.215643  306747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:23:51.219416  306747 start.go:563] Will wait 60s for crictl version
	I1017 19:23:51.219481  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:23:51.222932  306747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:23:51.247803  306747 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:23:51.247951  306747 ssh_runner.go:195] Run: crio --version
	I1017 19:23:51.276815  306747 ssh_runner.go:195] Run: crio --version
	I1017 19:23:51.309138  306747 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:23:51.311805  306747 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:23:51.327519  306747 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:23:51.331666  306747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
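The bash one-liner above drops any stale host.minikube.internal entry and rewrites /etc/hosts via a temp file; assuming the gateway address shown, the guest should end up with a line like:

	192.168.49.1	host.minikube.internal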
	I1017 19:23:51.341689  306747 kubeadm.go:883] updating cluster {Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:23:51.341851  306747 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:23:51.341916  306747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:23:51.379317  306747 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:23:51.379341  306747 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:23:51.379396  306747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:23:51.405884  306747 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:23:51.405906  306747 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:23:51.405918  306747 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 19:23:51.406057  306747 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:23:51.406155  306747 ssh_runner.go:195] Run: crio config
	I1017 19:23:51.475467  306747 cni.go:84] Creating CNI manager for ""
	I1017 19:23:51.475497  306747 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 19:23:51.475520  306747 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:23:51.475544  306747 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-254035 NodeName:ha-254035 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:23:51.475670  306747 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-254035"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 19:23:51.475693  306747 kube-vip.go:115] generating kube-vip config ...
	I1017 19:23:51.475756  306747 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:23:51.487989  306747 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:23:51.488119  306747 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
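With this static pod in place, the kube-vip instance that wins leader election binds the configured address to the configured interface; a hedged way to check it on the current leader, using the address and vip_interface values from the manifest above, is:

	$ ip addr show eth0 | grep 192.168.49.254   # a matching inet entry means this node currently holds the VIP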
	I1017 19:23:51.488198  306747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:23:51.496044  306747 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:23:51.496117  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1017 19:23:51.503891  306747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1017 19:23:51.517028  306747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:23:51.530699  306747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1017 19:23:51.544563  306747 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 19:23:51.557994  306747 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:23:51.561600  306747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:23:51.571313  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:23:51.690597  306747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:23:51.707379  306747 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.2
	I1017 19:23:51.707451  306747 certs.go:195] generating shared ca certs ...
	I1017 19:23:51.707483  306747 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:51.707678  306747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:23:51.707765  306747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:23:51.707807  306747 certs.go:257] generating profile certs ...
	I1017 19:23:51.707925  306747 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:23:51.707978  306747 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea
	I1017 19:23:51.708011  306747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt.96820cea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1017 19:23:52.143690  306747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt.96820cea ...
	I1017 19:23:52.143724  306747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt.96820cea: {Name:mk84072e95c642d9de97a7b2d7684c1b2411f2c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:52.143929  306747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea ...
	I1017 19:23:52.143944  306747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea: {Name:mk1e13a21ca5f9f77c2e8e2d4f37d2c902696b37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:52.144031  306747 certs.go:382] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt.96820cea -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt
	I1017 19:23:52.144173  306747 certs.go:386] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key
	I1017 19:23:52.144307  306747 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:23:52.144326  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:23:52.144342  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:23:52.144362  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:23:52.144377  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:23:52.144396  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:23:52.144419  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:23:52.144435  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:23:52.144450  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:23:52.144501  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:23:52.144555  306747 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:23:52.144570  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:23:52.144594  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:23:52.144621  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:23:52.144646  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:23:52.144696  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:23:52.144726  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:23:52.144744  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:23:52.144760  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:23:52.145349  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:23:52.164836  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:23:52.182173  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:23:52.200320  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:23:52.220031  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:23:52.239993  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:23:52.259787  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:23:52.278396  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:23:52.296286  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:23:52.313979  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:23:52.331810  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:23:52.349798  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:23:52.364237  306747 ssh_runner.go:195] Run: openssl version
	I1017 19:23:52.376391  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:23:52.385410  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:23:52.389746  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:23:52.389837  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:23:52.434948  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:23:52.443397  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:23:52.452268  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:23:52.460529  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:23:52.460626  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:23:52.518909  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:23:52.528730  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:23:52.541129  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:23:52.545573  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:23:52.545658  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:23:52.629233  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
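The symlink names used in the three "test -L ... || ln -fs ..." steps above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes, which is exactly what the preceding openssl x509 -hash runs compute; for example:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941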
	I1017 19:23:52.650967  306747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:23:52.657469  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:23:52.741430  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:23:52.801484  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:23:52.855613  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:23:52.911294  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:23:52.960715  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
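Each of the openssl runs above uses -checkend 86400, so it exits non-zero if the certificate would expire within the next 24 hours; for example, on a still-valid cert:

	$ openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	Certificate will not expire
	$ echo $?
	0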
	I1017 19:23:53.023389  306747 kubeadm.go:400] StartCluster: {Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:23:53.023526  306747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:23:53.023593  306747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:23:53.070982  306747 cri.go:89] found id: "a9f69dd8228df806b3caf0a6a77814b3035f6624474afd789ff17d36b93becbb"
	I1017 19:23:53.071006  306747 cri.go:89] found id: "2dc181e1d75c199e1d878c25f6b4eb381f5134e5e8ff6ed9deea02322d7cdf4c"
	I1017 19:23:53.071011  306747 cri.go:89] found id: "6fb4bcbcf5815899f9ed7e0ee3f40ae912c24131eda2482a13e66f3bf9211953"
	I1017 19:23:53.071015  306747 cri.go:89] found id: "99ffff8c4838d302fd86aa2def104fc0bc5a061a4b4b00a66b6659be26e84f94"
	I1017 19:23:53.071018  306747 cri.go:89] found id: "b745cb636fe8e12797dbad3808d1af04aa579d4fbd2ba8ac91052e88e1d9594d"
	I1017 19:23:53.071022  306747 cri.go:89] found id: ""
	I1017 19:23:53.071070  306747 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 19:23:53.085921  306747 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:23:53Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:23:53.085995  306747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:23:53.099392  306747 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 19:23:53.099418  306747 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 19:23:53.099471  306747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 19:23:53.118282  306747 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:23:53.118709  306747 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-254035" does not appear in /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:23:53.118820  306747 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-257739/kubeconfig needs updating (will repair): [kubeconfig missing "ha-254035" cluster setting kubeconfig missing "ha-254035" context setting]
	I1017 19:23:53.119084  306747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:53.119598  306747 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 19:23:53.120104  306747 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1017 19:23:53.120124  306747 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1017 19:23:53.120130  306747 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1017 19:23:53.120135  306747 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1017 19:23:53.120142  306747 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1017 19:23:53.120434  306747 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1017 19:23:53.120753  306747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 19:23:53.137306  306747 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1017 19:23:53.137333  306747 kubeadm.go:601] duration metric: took 37.90723ms to restartPrimaryControlPlane
	I1017 19:23:53.137344  306747 kubeadm.go:402] duration metric: took 113.964982ms to StartCluster
	I1017 19:23:53.137360  306747 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:53.137421  306747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:23:53.137983  306747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:53.138193  306747 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:23:53.138219  306747 start.go:241] waiting for startup goroutines ...
	I1017 19:23:53.138228  306747 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:23:53.138643  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:53.142436  306747 out.go:179] * Enabled addons: 
	I1017 19:23:53.145409  306747 addons.go:514] duration metric: took 7.175068ms for enable addons: enabled=[]
	I1017 19:23:53.145452  306747 start.go:246] waiting for cluster config update ...
	I1017 19:23:53.145461  306747 start.go:255] writing updated cluster config ...
	I1017 19:23:53.148803  306747 out.go:203] 
	I1017 19:23:53.151893  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:53.152042  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:53.155214  306747 out.go:179] * Starting "ha-254035-m02" control-plane node in "ha-254035" cluster
	I1017 19:23:53.158764  306747 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:23:53.161709  306747 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:23:53.164610  306747 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:23:53.164638  306747 cache.go:58] Caching tarball of preloaded images
	I1017 19:23:53.164743  306747 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:23:53.164758  306747 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:23:53.164894  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:53.165099  306747 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:23:53.194887  306747 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:23:53.194907  306747 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:23:53.194919  306747 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:23:53.194954  306747 start.go:360] acquireMachinesLock for ha-254035-m02: {Name:mkcf59557cfb2c18712510006a9b88f53e9d8916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:23:53.195003  306747 start.go:364] duration metric: took 34.034µs to acquireMachinesLock for "ha-254035-m02"
	I1017 19:23:53.195021  306747 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:23:53.195027  306747 fix.go:54] fixHost starting: m02
	I1017 19:23:53.195286  306747 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:23:53.230172  306747 fix.go:112] recreateIfNeeded on ha-254035-m02: state=Stopped err=<nil>
	W1017 19:23:53.230198  306747 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:23:53.233425  306747 out.go:252] * Restarting existing docker container for "ha-254035-m02" ...
	I1017 19:23:53.233506  306747 cli_runner.go:164] Run: docker start ha-254035-m02
	I1017 19:23:53.677194  306747 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:23:53.705353  306747 kic.go:430] container "ha-254035-m02" state is running.
	I1017 19:23:53.705741  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:23:53.741365  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:53.741612  306747 machine.go:93] provisionDockerMachine start ...
	I1017 19:23:53.741677  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:53.774362  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:53.774683  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1017 19:23:53.774700  306747 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:23:53.776617  306747 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32782->127.0.0.1:33179: read: connection reset by peer
	I1017 19:23:57.101345  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m02
	
	I1017 19:23:57.101367  306747 ubuntu.go:182] provisioning hostname "ha-254035-m02"
	I1017 19:23:57.101452  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:57.129925  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:57.130248  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1017 19:23:57.130260  306747 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035-m02 && echo "ha-254035-m02" | sudo tee /etc/hostname
	I1017 19:23:57.485252  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m02
	
	I1017 19:23:57.485332  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:57.518218  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:57.518523  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1017 19:23:57.518547  306747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:23:57.769807  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:23:57.769837  306747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:23:57.769852  306747 ubuntu.go:190] setting up certificates
	I1017 19:23:57.769861  306747 provision.go:84] configureAuth start
	I1017 19:23:57.769925  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:23:57.808507  306747 provision.go:143] copyHostCerts
	I1017 19:23:57.808576  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:23:57.808611  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:23:57.808621  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:23:57.808702  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:23:57.808777  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:23:57.808795  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:23:57.808799  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:23:57.808824  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:23:57.808885  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:23:57.808900  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:23:57.808904  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:23:57.808927  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:23:57.808973  306747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035-m02 san=[127.0.0.1 192.168.49.3 ha-254035-m02 localhost minikube]
	I1017 19:23:58.970392  306747 provision.go:177] copyRemoteCerts
	I1017 19:23:58.970466  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:23:58.970517  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:58.988411  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:23:59.109264  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:23:59.109327  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:23:59.143927  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:23:59.144007  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:23:59.175735  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:23:59.175798  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 19:23:59.207513  306747 provision.go:87] duration metric: took 1.437637997s to configureAuth
	I1017 19:23:59.207541  306747 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:23:59.207787  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:59.207891  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:59.254211  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:59.254534  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1017 19:23:59.254554  306747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:23:59.802396  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:23:59.802506  306747 machine.go:96] duration metric: took 6.06086173s to provisionDockerMachine
	I1017 19:23:59.802537  306747 start.go:293] postStartSetup for "ha-254035-m02" (driver="docker")
	I1017 19:23:59.802584  306747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:23:59.802692  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:23:59.802768  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:59.826274  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:23:59.933472  306747 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:23:59.937860  306747 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:23:59.937890  306747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:23:59.937902  306747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:23:59.937957  306747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:23:59.938045  306747 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:23:59.938058  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:23:59.938173  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:23:59.946632  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:23:59.974586  306747 start.go:296] duration metric: took 172.005858ms for postStartSetup
	I1017 19:23:59.974693  306747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:23:59.974736  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:59.998482  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:24:00.178671  306747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:24:00.215855  306747 fix.go:56] duration metric: took 7.020817171s for fixHost
	I1017 19:24:00.215889  306747 start.go:83] releasing machines lock for "ha-254035-m02", held for 7.020877911s
	I1017 19:24:00.215976  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:24:00.366887  306747 out.go:179] * Found network options:
	I1017 19:24:00.370345  306747 out.go:179]   - NO_PROXY=192.168.49.2
	W1017 19:24:00.373400  306747 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:24:00.373520  306747 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 19:24:00.373638  306747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:24:00.373712  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:24:00.373921  306747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:24:00.373955  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:24:00.473797  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:24:00.502501  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:24:01.163570  306747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:24:01.201188  306747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:24:01.201285  306747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:24:01.221545  306747 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:24:01.221578  306747 start.go:495] detecting cgroup driver to use...
	I1017 19:24:01.221624  306747 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:24:01.221679  306747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:24:01.249432  306747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:24:01.274115  306747 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:24:01.274197  306747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:24:01.300156  306747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:24:01.327634  306747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:24:01.676293  306747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:24:01.963473  306747 docker.go:234] disabling docker service ...
	I1017 19:24:01.963548  306747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:24:01.985469  306747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:24:02.006761  306747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:24:02.326335  306747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:24:02.689696  306747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:24:02.707153  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:24:02.733380  306747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:24:02.733503  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.745270  306747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:24:02.745354  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.761212  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.777017  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.786654  306747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:24:02.797775  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.809053  306747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.819042  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.830450  306747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:24:02.839137  306747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:24:02.853061  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:24:03.081615  306747 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:25:33.444575  306747 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.36287356s)
	I1017 19:25:33.444601  306747 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:25:33.444663  306747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:25:33.448790  306747 start.go:563] Will wait 60s for crictl version
	I1017 19:25:33.448855  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:25:33.452484  306747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:25:33.483181  306747 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:25:33.483261  306747 ssh_runner.go:195] Run: crio --version
	I1017 19:25:33.520275  306747 ssh_runner.go:195] Run: crio --version
	I1017 19:25:33.555708  306747 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:25:33.558710  306747 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 19:25:33.561569  306747 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:25:33.577269  306747 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:25:33.581166  306747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:25:33.590512  306747 mustload.go:65] Loading cluster: ha-254035
	I1017 19:25:33.590749  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:25:33.591003  306747 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:25:33.607631  306747 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:25:33.607910  306747 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.3
	I1017 19:25:33.607918  306747 certs.go:195] generating shared ca certs ...
	I1017 19:25:33.607932  306747 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:25:33.608031  306747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:25:33.608069  306747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:25:33.608076  306747 certs.go:257] generating profile certs ...
	I1017 19:25:33.608151  306747 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:25:33.608210  306747 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.5a836dc6
	I1017 19:25:33.608248  306747 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:25:33.608256  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:25:33.608268  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:25:33.608278  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:25:33.608288  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:25:33.608298  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:25:33.608314  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:25:33.608325  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:25:33.608334  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:25:33.608382  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:25:33.608409  306747 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:25:33.608418  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:25:33.608439  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:25:33.608460  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:25:33.608482  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:25:33.608557  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:25:33.608586  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:25:33.608606  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:25:33.608635  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:25:33.608691  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:25:33.626221  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:25:33.720799  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 19:25:33.724641  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 19:25:33.732808  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 19:25:33.736200  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 19:25:33.744126  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 19:25:33.747465  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 19:25:33.755494  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 19:25:33.759075  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1017 19:25:33.767011  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 19:25:33.770516  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 19:25:33.778582  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 19:25:33.781925  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 19:25:33.789662  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:25:33.814144  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:25:33.834289  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:25:33.855264  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:25:33.875243  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:25:33.892238  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:25:33.909902  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:25:33.927819  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:25:33.945089  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:25:33.970864  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:25:33.990984  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:25:34.011449  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 19:25:34.027436  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 19:25:34.042890  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 19:25:34.058368  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1017 19:25:34.072057  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 19:25:34.088147  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 19:25:34.104554  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1017 19:25:34.119006  306747 ssh_runner.go:195] Run: openssl version
	I1017 19:25:34.125500  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:25:34.134066  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:25:34.138184  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:25:34.138272  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:25:34.179366  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:25:34.187225  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:25:34.195194  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:25:34.198812  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:25:34.198884  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:25:34.240748  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:25:34.248576  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:25:34.256442  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:25:34.260252  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:25:34.260343  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:25:34.301741  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:25:34.309494  306747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:25:34.313266  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:25:34.354021  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:25:34.403496  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:25:34.452995  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:25:34.501920  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:25:34.553096  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 19:25:34.605637  306747 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1017 19:25:34.605735  306747 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:25:34.605768  306747 kube-vip.go:115] generating kube-vip config ...
	I1017 19:25:34.605818  306747 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:25:34.618260  306747 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:25:34.618384  306747 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1017 19:25:34.618473  306747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:25:34.626096  306747 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:25:34.626222  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 19:25:34.634241  306747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:25:34.648042  306747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:25:34.661462  306747 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 19:25:34.676617  306747 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:25:34.680227  306747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:25:34.690889  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:25:34.816737  306747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:25:34.831088  306747 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:25:34.831560  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:25:34.834934  306747 out.go:179] * Verifying Kubernetes components...
	I1017 19:25:34.837819  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:25:34.968993  306747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:25:34.983274  306747 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 19:25:34.983348  306747 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 19:25:34.983632  306747 node_ready.go:35] waiting up to 6m0s for node "ha-254035-m02" to be "Ready" ...
	I1017 19:25:40.996755  306747 node_ready.go:49] node "ha-254035-m02" is "Ready"
	I1017 19:25:40.996789  306747 node_ready.go:38] duration metric: took 6.013138239s for node "ha-254035-m02" to be "Ready" ...
	I1017 19:25:40.996811  306747 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:25:40.996889  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:41.497684  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:41.997836  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:42.497138  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:42.997736  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:43.497602  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:43.997356  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:44.497754  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:44.997290  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:45.497281  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:45.997333  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:46.497704  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:46.997128  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:47.497723  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:47.997671  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:48.497561  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:48.997733  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:49.497782  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:49.997750  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:50.497774  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:50.997177  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:51.497562  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:51.997821  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:52.497764  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:52.997863  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:53.497099  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:53.997052  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:54.497663  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:54.997664  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:55.497701  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:55.997019  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:56.497726  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:56.997168  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:57.497752  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:57.997835  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:58.497010  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:58.997743  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:59.497316  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:59.997012  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:00.497061  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:00.997884  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:01.497722  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:01.997039  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:02.497739  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:02.997315  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:03.497590  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:03.997754  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:04.497035  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:04.997744  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:05.497624  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:05.997419  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:06.497061  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:06.997596  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:07.497373  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:07.997733  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:08.497364  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:08.997732  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:09.497421  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:09.997728  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:10.497717  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:10.996987  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:11.497090  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:11.996943  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:12.497429  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:12.997010  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:13.496953  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:13.997093  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:14.497074  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:14.997281  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:15.497737  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:15.997688  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:16.497625  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:16.997704  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:17.497320  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:17.996949  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:18.497953  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:18.997042  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:19.497090  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:19.997041  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:20.497518  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:20.997019  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:21.497012  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:21.996982  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:22.497045  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:22.997657  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:23.497467  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:23.997803  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:24.497044  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:24.997325  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:25.497747  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:25.997044  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:26.497026  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:26.997552  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:27.497036  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:27.997604  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:28.497701  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:28.997373  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:29.497563  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:29.997697  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:30.497017  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:30.997407  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:31.497716  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:31.997874  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:32.497096  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:32.997561  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:33.497057  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:33.997665  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:34.497043  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:34.997691  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:34.997800  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:35.032363  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:35.032386  306747 cri.go:89] found id: ""
	I1017 19:26:35.032399  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:35.032460  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.036381  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:35.036459  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:35.065338  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:35.065359  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:35.065364  306747 cri.go:89] found id: ""
	I1017 19:26:35.065371  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:35.065425  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.069065  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.072703  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:35.072774  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:35.103898  306747 cri.go:89] found id: ""
	I1017 19:26:35.103925  306747 logs.go:282] 0 containers: []
	W1017 19:26:35.103934  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:35.103941  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:35.104009  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:35.133147  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:35.133171  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:35.133176  306747 cri.go:89] found id: ""
	I1017 19:26:35.133189  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:35.133243  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.137074  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.140598  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:35.140672  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:35.172805  306747 cri.go:89] found id: ""
	I1017 19:26:35.172831  306747 logs.go:282] 0 containers: []
	W1017 19:26:35.172840  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:35.172847  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:35.172921  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:35.200314  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:35.200339  306747 cri.go:89] found id: ""
	I1017 19:26:35.200347  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:35.200399  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.204068  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:35.204142  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:35.229333  306747 cri.go:89] found id: ""
	I1017 19:26:35.229355  306747 logs.go:282] 0 containers: []
	W1017 19:26:35.229364  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:35.229373  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:35.229386  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:35.270788  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:35.270824  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:35.327408  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:35.327441  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:35.407924  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:35.407963  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:35.511553  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:35.511590  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:35.532712  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:35.532742  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:35.560601  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:35.560631  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:35.605951  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:35.605984  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:35.637220  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:35.637251  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:35.667818  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:35.667848  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:35.697952  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:35.697980  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:36.107033  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:36.098521    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.099526    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.100351    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.101907    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.102306    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:36.098521    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.099526    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.100351    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.101907    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.102306    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:38.608691  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:38.620441  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:38.620597  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:38.653949  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:38.653982  306747 cri.go:89] found id: ""
	I1017 19:26:38.653991  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:38.654045  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.657661  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:38.657779  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:38.682961  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:38.682992  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:38.682998  306747 cri.go:89] found id: ""
	I1017 19:26:38.683005  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:38.683057  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.686897  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.690246  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:38.690316  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:38.727058  306747 cri.go:89] found id: ""
	I1017 19:26:38.727088  306747 logs.go:282] 0 containers: []
	W1017 19:26:38.727096  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:38.727102  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:38.727159  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:38.751866  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:38.751891  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:38.751895  306747 cri.go:89] found id: ""
	I1017 19:26:38.751902  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:38.751960  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.755561  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.758764  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:38.758835  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:38.791573  306747 cri.go:89] found id: ""
	I1017 19:26:38.791597  306747 logs.go:282] 0 containers: []
	W1017 19:26:38.791607  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:38.791613  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:38.791672  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:38.818970  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:38.818993  306747 cri.go:89] found id: ""
	I1017 19:26:38.819002  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:38.819054  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.822644  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:38.822766  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:38.849350  306747 cri.go:89] found id: ""
	I1017 19:26:38.849373  306747 logs.go:282] 0 containers: []
	W1017 19:26:38.849381  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:38.849390  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:38.849436  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:38.883482  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:38.883512  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:38.978629  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:38.978664  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:39.055121  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:39.045881    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.046283    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.047962    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.048507    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.050096    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:39.045881    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.046283    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.047962    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.048507    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.050096    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:39.055145  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:39.055158  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:39.081488  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:39.081516  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:39.123529  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:39.123560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:39.152993  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:39.153024  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:39.181581  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:39.181608  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:39.199086  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:39.199116  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:39.231605  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:39.231638  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:39.287509  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:39.287544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
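
The cycle above is what repeats for the rest of this log: minikube polls for a running kube-apiserver process (the "sudo pgrep -xnf kube-apiserver.*minikube.*" line that follows) and, while that poll keeps coming back empty-handed of a healthy server, re-collects the same diagnostics every few seconds. A minimal way to reproduce the same check by hand, assuming shell access to the node (e.g. via minikube ssh) — the curl probe is not something the log itself runs, it is only a manual check against the same localhost:8443 endpoint the kubectl attempts above fail to reach:

	# does a kube-apiserver process exist for this profile?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# is anything actually serving on the expected port?
	curl -sk https://localhost:8443/healthz || echo 'apiserver not reachable'
	# what does CRI-O think the apiserver container is doing?
	sudo crictl ps -a --name=kube-apiserver
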
	I1017 19:26:41.868969  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:41.879522  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:41.879591  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:41.906366  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:41.906388  306747 cri.go:89] found id: ""
	I1017 19:26:41.906397  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:41.906450  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:41.909979  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:41.910090  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:41.940072  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:41.940101  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:41.940105  306747 cri.go:89] found id: ""
	I1017 19:26:41.940113  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:41.940173  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:41.945194  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:41.948667  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:41.948784  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:41.979374  306747 cri.go:89] found id: ""
	I1017 19:26:41.979410  306747 logs.go:282] 0 containers: []
	W1017 19:26:41.979419  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:41.979425  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:41.979492  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:42.008367  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:42.008445  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:42.008465  306747 cri.go:89] found id: ""
	I1017 19:26:42.008493  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:42.008628  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:42.016467  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:42.031735  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:42.031876  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:42.079629  306747 cri.go:89] found id: ""
	I1017 19:26:42.079665  306747 logs.go:282] 0 containers: []
	W1017 19:26:42.079676  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:42.079684  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:42.079750  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:42.122316  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:42.122342  306747 cri.go:89] found id: ""
	I1017 19:26:42.122351  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:42.122423  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:42.131137  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:42.131241  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:42.200222  306747 cri.go:89] found id: ""
	I1017 19:26:42.200249  306747 logs.go:282] 0 containers: []
	W1017 19:26:42.200259  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:42.200270  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:42.200283  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:42.314817  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:42.314908  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:42.375712  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:42.375762  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:42.431602  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:42.431639  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:42.465004  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:42.465097  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:42.491256  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:42.491284  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:42.567094  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:42.558455    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.559104    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.560757    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.561472    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.563142    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:42.558455    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.559104    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.560757    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.561472    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.563142    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:42.567120  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:42.567134  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:42.597513  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:42.597543  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:42.632231  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:42.632268  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:42.659445  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:42.659478  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:42.686189  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:42.686217  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:45.285116  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:45.308457  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:45.308578  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:45.374050  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:45.374075  306747 cri.go:89] found id: ""
	I1017 19:26:45.374083  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:45.374152  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.386847  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:45.387031  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:45.432081  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:45.432105  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:45.432111  306747 cri.go:89] found id: ""
	I1017 19:26:45.432129  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:45.432185  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.436568  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.443473  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:45.443575  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:45.473992  306747 cri.go:89] found id: ""
	I1017 19:26:45.474066  306747 logs.go:282] 0 containers: []
	W1017 19:26:45.474095  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:45.474124  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:45.474279  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:45.508735  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:45.508808  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:45.508820  306747 cri.go:89] found id: ""
	I1017 19:26:45.508829  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:45.508889  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.513024  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.517047  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:45.517124  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:45.544672  306747 cri.go:89] found id: ""
	I1017 19:26:45.544698  306747 logs.go:282] 0 containers: []
	W1017 19:26:45.544707  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:45.544714  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:45.544814  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:45.577228  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:45.577250  306747 cri.go:89] found id: ""
	I1017 19:26:45.577257  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:45.577316  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.581280  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:45.581379  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:45.608143  306747 cri.go:89] found id: ""
	I1017 19:26:45.608166  306747 logs.go:282] 0 containers: []
	W1017 19:26:45.608174  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:45.608183  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:45.608226  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:45.627200  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:45.627230  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:45.699692  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:45.692149    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.692814    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.694339    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.694730    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.696164    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:45.692149    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.692814    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.694339    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.694730    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.696164    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:45.699717  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:45.699732  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:45.725239  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:45.725269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:45.766316  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:45.766359  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:45.831866  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:45.831908  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:45.869708  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:45.869736  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:45.910170  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:45.910198  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:46.010455  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:46.010498  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:46.047523  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:46.047559  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:46.076222  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:46.076306  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
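
Each collection pass follows the same two-step pattern visible above: first enumerate container IDs per component with "crictl ps -a --quiet --name=<component>", then tail each ID that was found. Sketched as a standalone loop (the component list and the --tail 400 limit are taken from the log; the loop itself is illustrative, not minikube's code):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  for id in $(sudo crictl ps -a --quiet --name="$name"); do
	    echo "=== $name $id ==="
	    sudo /usr/local/bin/crictl logs --tail 400 "$id"
	  done
	done

Components with no matching containers (coredns, kube-proxy and kindnet here) simply yield no IDs, which is what the "0 containers" and "No container was found matching" lines record.
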
	I1017 19:26:48.663425  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:48.673865  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:48.673931  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:48.699244  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:48.699267  306747 cri.go:89] found id: ""
	I1017 19:26:48.699275  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:48.699330  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.702918  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:48.702988  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:48.729193  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:48.729268  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:48.729288  306747 cri.go:89] found id: ""
	I1017 19:26:48.729311  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:48.729390  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.732927  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.736821  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:48.736893  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:48.763745  306747 cri.go:89] found id: ""
	I1017 19:26:48.763770  306747 logs.go:282] 0 containers: []
	W1017 19:26:48.763780  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:48.763786  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:48.763842  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:48.790384  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:48.790407  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:48.790413  306747 cri.go:89] found id: ""
	I1017 19:26:48.790420  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:48.790496  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.796703  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.800342  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:48.800409  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:48.825802  306747 cri.go:89] found id: ""
	I1017 19:26:48.825830  306747 logs.go:282] 0 containers: []
	W1017 19:26:48.825839  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:48.825846  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:48.825904  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:48.863208  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:48.863231  306747 cri.go:89] found id: ""
	I1017 19:26:48.863239  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:48.863294  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.866822  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:48.866902  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:48.896937  306747 cri.go:89] found id: ""
	I1017 19:26:48.897017  306747 logs.go:282] 0 containers: []
	W1017 19:26:48.897039  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:48.897080  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:48.897109  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:48.999995  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:49.000071  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:49.019541  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:49.019629  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:49.045737  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:49.045806  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:49.106443  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:49.106478  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:49.135555  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:49.135583  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:49.162643  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:49.162670  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:49.240999  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:49.241038  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:49.311820  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:49.304505    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.305101    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.306817    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.307292    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.308350    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:49.304505    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.305101    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.306817    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.307292    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.308350    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:49.311849  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:49.311861  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:49.347575  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:49.347614  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:49.399291  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:49.399328  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:51.931612  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:51.944600  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:51.944667  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:51.977717  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:51.977741  306747 cri.go:89] found id: ""
	I1017 19:26:51.977750  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:51.977808  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:51.981757  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:51.981877  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:52.013943  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:52.013965  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:52.013971  306747 cri.go:89] found id: ""
	I1017 19:26:52.013979  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:52.014034  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.017876  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.021450  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:52.021529  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:52.054762  306747 cri.go:89] found id: ""
	I1017 19:26:52.054788  306747 logs.go:282] 0 containers: []
	W1017 19:26:52.054797  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:52.054804  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:52.054873  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:52.094469  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:52.094492  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:52.094498  306747 cri.go:89] found id: ""
	I1017 19:26:52.094506  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:52.094561  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.099707  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.103487  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:52.103557  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:52.137366  306747 cri.go:89] found id: ""
	I1017 19:26:52.137393  306747 logs.go:282] 0 containers: []
	W1017 19:26:52.137403  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:52.137410  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:52.137494  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:52.164118  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:52.164142  306747 cri.go:89] found id: ""
	I1017 19:26:52.164151  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:52.164235  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.167871  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:52.167951  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:52.195587  306747 cri.go:89] found id: ""
	I1017 19:26:52.195667  306747 logs.go:282] 0 containers: []
	W1017 19:26:52.195691  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:52.195730  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:52.195759  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:52.214865  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:52.214895  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:52.252677  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:52.252718  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:52.306241  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:52.306281  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:52.362956  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:52.362991  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:52.391628  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:52.391659  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:52.471864  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:52.463115    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.464242    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.464958    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.465978    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.466515    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:52.463115    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.464242    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.464958    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.465978    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.466515    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:52.471900  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:52.471915  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:52.518448  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:52.518483  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:52.552877  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:52.552904  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:52.635208  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:52.635241  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:52.671244  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:52.671274  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
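
The recurring "describe nodes" failure is the clearest symptom in these cycles: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at localhost:8443, and every connection is refused, so nothing is listening there yet. Two hand checks that would confirm this on the node (neither is run by the log itself; they assume ss and curl are available in the image):

	# confirm nothing is bound to the apiserver port
	sudo ss -ltnp | grep 8443 || echo 'no listener on 8443'
	# the same refusal kubectl reports, without going through kubectl
	curl -sk https://localhost:8443/healthz

Until a listener appears on 8443, each later cycle in this log keeps hitting the same "connection refused" path.
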
	I1017 19:26:55.270940  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:55.282002  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:55.282081  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:55.307829  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:55.307853  306747 cri.go:89] found id: ""
	I1017 19:26:55.307862  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:55.307917  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.311717  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:55.311788  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:55.337747  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:55.337770  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:55.337775  306747 cri.go:89] found id: ""
	I1017 19:26:55.337783  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:55.337840  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.341583  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.345443  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:55.345519  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:55.374240  306747 cri.go:89] found id: ""
	I1017 19:26:55.374268  306747 logs.go:282] 0 containers: []
	W1017 19:26:55.374277  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:55.374283  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:55.374348  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:55.400969  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:55.400994  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:55.400999  306747 cri.go:89] found id: ""
	I1017 19:26:55.401007  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:55.401074  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.405683  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.409216  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:55.409288  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:55.436866  306747 cri.go:89] found id: ""
	I1017 19:26:55.436897  306747 logs.go:282] 0 containers: []
	W1017 19:26:55.436907  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:55.436913  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:55.436972  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:55.469071  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:55.469094  306747 cri.go:89] found id: ""
	I1017 19:26:55.469103  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:55.469160  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.472979  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:55.473075  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:55.504006  306747 cri.go:89] found id: ""
	I1017 19:26:55.504033  306747 logs.go:282] 0 containers: []
	W1017 19:26:55.504043  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:55.504052  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:55.504064  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:55.530026  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:55.530065  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:55.566251  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:55.566281  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:55.619544  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:55.619580  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:55.647120  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:55.647155  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:55.674483  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:55.674552  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:55.771290  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:55.771328  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:55.791108  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:55.791139  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:55.877444  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:55.868298    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.869608    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.870496    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.871568    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.873502    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:55.868298    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.869608    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.870496    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.871568    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.873502    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:55.877467  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:55.877481  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:55.942292  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:55.942327  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:56.029233  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:56.029279  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:58.564639  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:58.575251  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:58.575327  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:58.603745  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:58.603769  306747 cri.go:89] found id: ""
	I1017 19:26:58.603778  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:58.603841  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.607600  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:58.607673  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:58.635364  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:58.635387  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:58.635393  306747 cri.go:89] found id: ""
	I1017 19:26:58.635401  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:58.635459  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.639164  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.642599  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:58.642665  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:58.671065  306747 cri.go:89] found id: ""
	I1017 19:26:58.671089  306747 logs.go:282] 0 containers: []
	W1017 19:26:58.671098  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:58.671105  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:58.671161  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:58.697581  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:58.697606  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:58.697613  306747 cri.go:89] found id: ""
	I1017 19:26:58.697621  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:58.697701  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.701636  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.705721  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:58.705790  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:58.739521  306747 cri.go:89] found id: ""
	I1017 19:26:58.739548  306747 logs.go:282] 0 containers: []
	W1017 19:26:58.739557  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:58.739563  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:58.739618  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:58.766994  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:58.767022  306747 cri.go:89] found id: ""
	I1017 19:26:58.767030  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:58.767085  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.771181  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:58.771253  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:58.798835  306747 cri.go:89] found id: ""
	I1017 19:26:58.798862  306747 logs.go:282] 0 containers: []
	W1017 19:26:58.798871  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:58.798880  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:58.798891  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:58.841984  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:58.842010  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:58.866669  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:58.866697  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:58.916756  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:58.916789  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:58.980015  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:58.980050  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:59.009380  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:59.009409  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:59.109257  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:59.109295  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:59.177549  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:59.168803    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.169600    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.171537    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.172076    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.173678    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:59.168803    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.169600    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.171537    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.172076    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.173678    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:59.177581  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:59.177599  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:59.206699  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:59.206727  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:59.242107  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:59.242142  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:59.275450  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:59.275479  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:01.857354  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:01.869639  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:01.869705  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:01.902744  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:01.902764  306747 cri.go:89] found id: ""
	I1017 19:27:01.902772  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:01.902838  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:01.906810  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:01.906935  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:01.934659  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:01.934722  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:01.934742  306747 cri.go:89] found id: ""
	I1017 19:27:01.934766  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:01.934853  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:01.938762  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:01.946146  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:01.946267  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:01.980395  306747 cri.go:89] found id: ""
	I1017 19:27:01.980461  306747 logs.go:282] 0 containers: []
	W1017 19:27:01.980482  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:01.980505  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:01.980614  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:02.015273  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:02.015298  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:02.015303  306747 cri.go:89] found id: ""
	I1017 19:27:02.015320  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:02.015383  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:02.019407  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:02.023456  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:02.023534  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:02.051152  306747 cri.go:89] found id: ""
	I1017 19:27:02.051182  306747 logs.go:282] 0 containers: []
	W1017 19:27:02.051192  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:02.051198  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:02.051258  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:02.080723  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:02.080745  306747 cri.go:89] found id: ""
	I1017 19:27:02.080753  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:02.080813  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:02.084603  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:02.084678  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:02.120072  306747 cri.go:89] found id: ""
	I1017 19:27:02.120146  306747 logs.go:282] 0 containers: []
	W1017 19:27:02.120170  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:02.120195  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:02.120230  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:02.139600  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:02.139631  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:02.185131  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:02.185166  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:02.229909  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:02.229940  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:02.260111  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:02.260140  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:02.288588  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:02.288618  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:02.370459  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:02.370495  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:02.476572  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:02.476608  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:02.551905  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:02.543576    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.544579    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.546057    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.546535    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.548140    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:02.543576    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.544579    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.546057    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.546535    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.548140    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:02.551926  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:02.551940  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:02.578293  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:02.578321  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:02.633456  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:02.633493  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:05.164689  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:05.177240  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:05.177315  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:05.205506  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:05.205530  306747 cri.go:89] found id: ""
	I1017 19:27:05.205540  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:05.205597  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.209410  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:05.209492  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:05.236360  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:05.236383  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:05.236388  306747 cri.go:89] found id: ""
	I1017 19:27:05.236396  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:05.236448  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.240255  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.243840  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:05.243907  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:05.279749  306747 cri.go:89] found id: ""
	I1017 19:27:05.279788  306747 logs.go:282] 0 containers: []
	W1017 19:27:05.279798  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:05.279804  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:05.279860  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:05.307767  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:05.307790  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:05.307796  306747 cri.go:89] found id: ""
	I1017 19:27:05.307803  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:05.307857  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.311429  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.314827  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:05.314906  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:05.340148  306747 cri.go:89] found id: ""
	I1017 19:27:05.340175  306747 logs.go:282] 0 containers: []
	W1017 19:27:05.340184  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:05.340190  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:05.340246  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:05.366040  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:05.366063  306747 cri.go:89] found id: ""
	I1017 19:27:05.366071  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:05.366145  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.369954  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:05.370054  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:05.396415  306747 cri.go:89] found id: ""
	I1017 19:27:05.396439  306747 logs.go:282] 0 containers: []
	W1017 19:27:05.396448  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:05.396457  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:05.396468  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:05.491768  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:05.491804  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:05.510133  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:05.510179  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:05.588291  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:05.580157    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.580846    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.582570    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.583481    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.584634    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:05.580157    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.580846    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.582570    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.583481    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.584634    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:05.588313  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:05.588326  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:05.616894  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:05.616921  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:05.660215  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:05.660252  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:05.715621  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:05.715657  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:05.744211  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:05.744240  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:05.777510  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:05.777544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:05.808038  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:05.808066  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:05.885964  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:05.886000  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:08.420171  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:08.431142  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:08.431221  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:08.457528  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:08.457552  306747 cri.go:89] found id: ""
	I1017 19:27:08.457561  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:08.457616  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.461556  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:08.461665  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:08.492016  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:08.492039  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:08.492044  306747 cri.go:89] found id: ""
	I1017 19:27:08.492052  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:08.492103  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.495761  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.500185  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:08.500282  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:08.526916  306747 cri.go:89] found id: ""
	I1017 19:27:08.526941  306747 logs.go:282] 0 containers: []
	W1017 19:27:08.526950  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:08.526957  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:08.527014  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:08.556113  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:08.556134  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:08.556140  306747 cri.go:89] found id: ""
	I1017 19:27:08.556147  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:08.556214  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.560101  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.564014  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:08.564084  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:08.594033  306747 cri.go:89] found id: ""
	I1017 19:27:08.594056  306747 logs.go:282] 0 containers: []
	W1017 19:27:08.594071  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:08.594079  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:08.594135  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:08.620047  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:08.620113  306747 cri.go:89] found id: ""
	I1017 19:27:08.620142  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:08.620221  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.624310  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:08.624418  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:08.649502  306747 cri.go:89] found id: ""
	I1017 19:27:08.649567  306747 logs.go:282] 0 containers: []
	W1017 19:27:08.649595  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:08.649623  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:08.649648  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:08.743803  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:08.743839  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:08.769242  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:08.769268  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:08.799565  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:08.799593  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:08.828556  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:08.828635  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:08.846407  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:08.846438  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:08.930960  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:08.922375    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.923180    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.925039    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.925592    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.927335    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:08.922375    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.923180    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.925039    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.925592    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.927335    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:08.930984  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:08.930996  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:08.989884  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:08.989918  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:09.029740  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:09.029776  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:09.088750  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:09.088784  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:09.174757  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:09.174791  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:11.706527  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:11.717507  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:11.717580  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:11.742517  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:11.742540  306747 cri.go:89] found id: ""
	I1017 19:27:11.742548  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:11.742628  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.746473  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:11.746545  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:11.778260  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:11.778322  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:11.778341  306747 cri.go:89] found id: ""
	I1017 19:27:11.778364  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:11.778435  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.782026  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.785484  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:11.785543  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:11.816069  306747 cri.go:89] found id: ""
	I1017 19:27:11.816094  306747 logs.go:282] 0 containers: []
	W1017 19:27:11.816103  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:11.816109  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:11.816175  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:11.841738  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:11.841812  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:11.841832  306747 cri.go:89] found id: ""
	I1017 19:27:11.841848  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:11.841921  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.845737  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.849826  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:11.849962  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:11.877696  306747 cri.go:89] found id: ""
	I1017 19:27:11.877760  306747 logs.go:282] 0 containers: []
	W1017 19:27:11.877783  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:11.877806  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:11.877878  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:11.905454  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:11.905478  306747 cri.go:89] found id: ""
	I1017 19:27:11.905487  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:11.905551  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.909271  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:11.909371  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:11.937354  306747 cri.go:89] found id: ""
	I1017 19:27:11.937378  306747 logs.go:282] 0 containers: []
	W1017 19:27:11.937388  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:11.937397  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:11.937408  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:11.964198  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:11.964227  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:12.047655  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:12.047711  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:12.152282  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:12.152323  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:12.185576  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:12.185607  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:12.216321  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:12.216350  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:12.234007  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:12.234037  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:12.302472  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:12.293592    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.294322    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.296814    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.297401    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.299030    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:12.293592    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.294322    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.296814    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.297401    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.299030    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:12.302493  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:12.302508  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:12.361658  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:12.361692  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:12.396422  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:12.396455  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:12.450643  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:12.450679  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:14.981141  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:14.992478  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:14.992583  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:15.029616  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:15.029652  306747 cri.go:89] found id: ""
	I1017 19:27:15.029662  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:15.029733  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.034198  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:15.034280  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:15.067180  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:15.067204  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:15.067210  306747 cri.go:89] found id: ""
	I1017 19:27:15.067223  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:15.067278  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.071734  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.075202  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:15.075278  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:15.102244  306747 cri.go:89] found id: ""
	I1017 19:27:15.102269  306747 logs.go:282] 0 containers: []
	W1017 19:27:15.102278  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:15.102285  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:15.102345  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:15.130161  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:15.130189  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:15.130195  306747 cri.go:89] found id: ""
	I1017 19:27:15.130203  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:15.130258  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.134790  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.138971  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:15.139069  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:15.173861  306747 cri.go:89] found id: ""
	I1017 19:27:15.173886  306747 logs.go:282] 0 containers: []
	W1017 19:27:15.173896  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:15.173903  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:15.173964  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:15.202641  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:15.202665  306747 cri.go:89] found id: ""
	I1017 19:27:15.202674  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:15.202732  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.206633  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:15.206702  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:15.234246  306747 cri.go:89] found id: ""
	I1017 19:27:15.234273  306747 logs.go:282] 0 containers: []
	W1017 19:27:15.234283  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:15.234294  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:15.234305  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:15.315039  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:15.315073  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:15.418425  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:15.418463  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:15.436291  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:15.436322  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:15.508060  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:15.500418    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.501026    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.502514    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.502986    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.504397    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:15.500418    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.501026    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.502514    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.502986    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.504397    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:15.508127  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:15.508156  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:15.541312  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:15.541345  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:15.597746  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:15.597777  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:15.630514  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:15.630544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:15.662426  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:15.662454  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:15.690843  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:15.690870  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:15.737261  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:15.737305  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:18.271724  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:18.282865  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:18.282933  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:18.310461  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:18.310530  306747 cri.go:89] found id: ""
	I1017 19:27:18.310545  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:18.310598  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.314206  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:18.314277  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:18.343711  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:18.343736  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:18.343741  306747 cri.go:89] found id: ""
	I1017 19:27:18.343750  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:18.343827  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.347663  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.351287  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:18.351359  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:18.378302  306747 cri.go:89] found id: ""
	I1017 19:27:18.378329  306747 logs.go:282] 0 containers: []
	W1017 19:27:18.378350  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:18.378356  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:18.378434  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:18.405852  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:18.405876  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:18.405881  306747 cri.go:89] found id: ""
	I1017 19:27:18.405889  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:18.405977  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.409609  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.413366  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:18.413434  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:18.438274  306747 cri.go:89] found id: ""
	I1017 19:27:18.438308  306747 logs.go:282] 0 containers: []
	W1017 19:27:18.438332  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:18.438348  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:18.438428  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:18.465310  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:18.465379  306747 cri.go:89] found id: ""
	I1017 19:27:18.465394  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:18.465449  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.469114  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:18.469267  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:18.495209  306747 cri.go:89] found id: ""
	I1017 19:27:18.495236  306747 logs.go:282] 0 containers: []
	W1017 19:27:18.495245  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:18.495254  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:18.495269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:18.521513  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:18.521541  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:18.551762  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:18.551788  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:18.647502  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:18.647539  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:18.665784  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:18.665815  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:18.718577  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:18.718624  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:18.777594  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:18.777628  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:18.807963  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:18.807989  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:18.892875  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:18.892910  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:18.960765  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:18.951643    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.952944    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.953536    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.955189    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.955840    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:18.951643    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.952944    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.953536    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.955189    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.955840    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:18.960787  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:18.960801  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:18.988908  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:18.988936  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:21.525356  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:21.536317  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:21.536383  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:21.562005  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:21.562074  306747 cri.go:89] found id: ""
	I1017 19:27:21.562089  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:21.562148  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.565814  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:21.565899  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:21.593641  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:21.593662  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:21.593668  306747 cri.go:89] found id: ""
	I1017 19:27:21.593675  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:21.593728  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.597715  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.601210  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:21.601286  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:21.626313  306747 cri.go:89] found id: ""
	I1017 19:27:21.626339  306747 logs.go:282] 0 containers: []
	W1017 19:27:21.626349  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:21.626355  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:21.626413  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:21.658772  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:21.658794  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:21.658800  306747 cri.go:89] found id: ""
	I1017 19:27:21.658807  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:21.658866  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.662812  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.666487  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:21.666561  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:21.698844  306747 cri.go:89] found id: ""
	I1017 19:27:21.698905  306747 logs.go:282] 0 containers: []
	W1017 19:27:21.698927  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:21.698951  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:21.699030  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:21.728779  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:21.728838  306747 cri.go:89] found id: ""
	I1017 19:27:21.728865  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:21.728939  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.732581  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:21.732691  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:21.758611  306747 cri.go:89] found id: ""
	I1017 19:27:21.758636  306747 logs.go:282] 0 containers: []
	W1017 19:27:21.758645  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:21.758655  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:21.758685  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:21.853910  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:21.853951  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:21.929259  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:21.920729    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.921839    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.923480    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.923794    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.925410    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:21.920729    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.921839    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.923480    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.923794    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.925410    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:21.929281  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:21.929294  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:21.969445  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:21.969472  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:22.060427  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:22.060560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:22.126121  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:22.126202  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:22.196425  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:22.196503  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:22.261955  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:22.262043  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:22.285064  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:22.285159  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:22.339749  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:22.339827  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:22.385350  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:22.385427  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:24.966467  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:24.992294  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:24.992366  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:25.035727  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:25.035754  306747 cri.go:89] found id: ""
	I1017 19:27:25.035762  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:25.035847  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.040229  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:25.040304  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:25.088117  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:25.088145  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:25.088152  306747 cri.go:89] found id: ""
	I1017 19:27:25.088159  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:25.088215  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.092329  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.099299  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:25.099383  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:25.150822  306747 cri.go:89] found id: ""
	I1017 19:27:25.150858  306747 logs.go:282] 0 containers: []
	W1017 19:27:25.150868  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:25.150878  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:25.150945  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:25.211825  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:25.211850  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:25.211855  306747 cri.go:89] found id: ""
	I1017 19:27:25.211863  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:25.211927  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.217398  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.221047  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:25.221126  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:25.258850  306747 cri.go:89] found id: ""
	I1017 19:27:25.258885  306747 logs.go:282] 0 containers: []
	W1017 19:27:25.258895  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:25.258904  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:25.258968  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:25.295477  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:25.295500  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:25.295512  306747 cri.go:89] found id: ""
	I1017 19:27:25.295520  306747 logs.go:282] 2 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:25.295576  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.301386  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.305803  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:25.305873  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:25.334929  306747 cri.go:89] found id: ""
	I1017 19:27:25.334954  306747 logs.go:282] 0 containers: []
	W1017 19:27:25.334970  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:25.334986  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:25.335006  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:25.365373  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:25.365402  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:25.382590  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:25.382626  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:25.432469  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:25.432570  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:25.478525  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:25.478601  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:25.551480  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:25.551560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:25.583783  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:25.583858  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:25.679255  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:25.679301  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:25.739090  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:25.739118  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:25.854982  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:25.855021  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:25.955288  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:25.946765    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.947610    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.949285    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.949589    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.951072    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:25.946765    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.947610    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.949285    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.949589    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.951072    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:25.955307  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:25.955319  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:26.000458  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:26.000579  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:28.530525  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:28.542430  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:28.542500  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:28.570373  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:28.570394  306747 cri.go:89] found id: ""
	I1017 19:27:28.570402  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:28.570454  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.575832  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:28.575903  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:28.604287  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:28.604307  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:28.604313  306747 cri.go:89] found id: ""
	I1017 19:27:28.604320  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:28.604374  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.608248  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.612312  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:28.612380  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:28.638709  306747 cri.go:89] found id: ""
	I1017 19:27:28.638735  306747 logs.go:282] 0 containers: []
	W1017 19:27:28.638743  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:28.638750  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:28.638807  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:28.665927  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:28.665951  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:28.665957  306747 cri.go:89] found id: ""
	I1017 19:27:28.665964  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:28.666022  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.669671  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.673220  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:28.673317  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:28.703161  306747 cri.go:89] found id: ""
	I1017 19:27:28.703188  306747 logs.go:282] 0 containers: []
	W1017 19:27:28.703197  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:28.703204  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:28.703264  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:28.733314  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:28.733379  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:28.733389  306747 cri.go:89] found id: ""
	I1017 19:27:28.733397  306747 logs.go:282] 2 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:28.733460  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.736998  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.740330  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:28.740444  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:28.765130  306747 cri.go:89] found id: ""
	I1017 19:27:28.765156  306747 logs.go:282] 0 containers: []
	W1017 19:27:28.765165  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:28.765174  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:28.765216  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:28.834887  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:28.826610    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.827402    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.829127    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.829428    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.830934    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:28.826610    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.827402    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.829127    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.829428    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.830934    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:28.834910  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:28.834923  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:28.870142  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:28.870187  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:28.912354  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:28.912388  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:28.968695  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:28.968728  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:29.009047  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:29.009078  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:29.036706  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:29.036734  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:29.120616  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:29.120654  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:29.153285  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:29.153313  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:29.250625  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:29.250664  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:29.271875  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:29.271907  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:29.321668  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:29.321703  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:31.848333  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:31.859324  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:31.859392  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:31.892308  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:31.892331  306747 cri.go:89] found id: ""
	I1017 19:27:31.892347  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:31.892401  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.896342  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:31.896433  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:31.924335  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:31.924359  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:31.924364  306747 cri.go:89] found id: ""
	I1017 19:27:31.924371  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:31.924446  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.928119  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.931375  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:31.931444  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:31.961757  306747 cri.go:89] found id: ""
	I1017 19:27:31.961783  306747 logs.go:282] 0 containers: []
	W1017 19:27:31.961792  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:31.961800  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:31.961857  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:31.990900  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:31.990924  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:31.990929  306747 cri.go:89] found id: ""
	I1017 19:27:31.990937  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:31.990997  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.994670  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.998160  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:31.998292  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:32.030448  306747 cri.go:89] found id: ""
	I1017 19:27:32.030523  306747 logs.go:282] 0 containers: []
	W1017 19:27:32.030539  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:32.030548  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:32.030615  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:32.062242  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:32.062267  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:32.062272  306747 cri.go:89] found id: ""
	I1017 19:27:32.062280  306747 logs.go:282] 2 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:32.062332  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:32.066062  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:32.069606  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:32.069682  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:32.102492  306747 cri.go:89] found id: ""
	I1017 19:27:32.102534  306747 logs.go:282] 0 containers: []
	W1017 19:27:32.102544  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:32.102553  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:32.102566  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:32.179017  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:32.170484    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.170960    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.172496    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.172884    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.174718    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:32.170484    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.170960    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.172496    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.172884    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.174718    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:32.179037  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:32.179050  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:32.225447  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:32.225475  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:32.270526  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:32.270557  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:32.304149  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:32.304181  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:32.330757  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:32.330837  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:32.410571  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:32.410610  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:32.443417  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:32.443444  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:32.461860  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:32.461890  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:32.510037  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:32.510083  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:32.569278  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:32.569325  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:32.602243  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:32.602269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:35.200643  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:35.211574  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:35.211646  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:35.243134  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:35.243158  306747 cri.go:89] found id: ""
	I1017 19:27:35.243166  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:35.243222  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.247054  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:35.247144  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:35.276216  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:35.276237  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:35.276243  306747 cri.go:89] found id: ""
	I1017 19:27:35.276251  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:35.276304  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.280057  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.284007  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:35.284080  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:35.310830  306747 cri.go:89] found id: ""
	I1017 19:27:35.310909  306747 logs.go:282] 0 containers: []
	W1017 19:27:35.310932  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:35.310955  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:35.311062  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:35.354572  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:35.354597  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:35.354602  306747 cri.go:89] found id: ""
	I1017 19:27:35.354610  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:35.354666  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.358450  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.361871  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:35.361942  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:35.389041  306747 cri.go:89] found id: ""
	I1017 19:27:35.389065  306747 logs.go:282] 0 containers: []
	W1017 19:27:35.389073  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:35.389079  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:35.389137  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:35.415942  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:35.415967  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:35.415972  306747 cri.go:89] found id: ""
	I1017 19:27:35.415980  306747 logs.go:282] 2 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:35.416037  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.419700  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.423643  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:35.423765  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:35.450381  306747 cri.go:89] found id: ""
	I1017 19:27:35.450404  306747 logs.go:282] 0 containers: []
	W1017 19:27:35.450413  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:35.450422  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:35.450435  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:35.478252  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:35.478280  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:35.522590  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:35.522623  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:35.578335  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:35.578372  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:35.613061  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:35.613091  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:35.638492  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:35.638520  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:35.722854  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:35.722891  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:35.757639  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:35.757672  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:35.863697  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:35.863735  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:35.940574  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:35.932704    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.933394    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.935016    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.935464    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.936965    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:35.932704    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.933394    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.935016    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.935464    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.936965    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
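The "describe nodes" attempts above all fail the same way: kubectl is refused on localhost:8443, meaning nothing is accepting apiserver connections at this point in the restart. A minimal by-hand confirmation of that state (a sketch only, assuming shell access to the control-plane node, for example via minikube ssh) would be:

	# Same process check the harness runs between log-gathering rounds
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Is anything listening on the apiserver port?
	sudo ss -ltnp | grep ':8443' || echo 'nothing listening on 8443'
	# Direct probe of the health endpoint; expect "connection refused" while the apiserver is down
	curl -ksS https://localhost:8443/healthz || true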
	I1017 19:27:35.940597  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:35.940610  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:35.976992  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:35.977024  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:36.004857  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:36.004894  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
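Each gathering round in this log uses the same few tools: crictl to resolve container IDs by component name, crictl logs to tail each container, and journalctl/dmesg for the CRI-O unit, the kubelet and the kernel. A rough by-hand equivalent, run as root on the node (the container ID below is the kube-apiserver ID reported earlier in this log; substitute whatever crictl returns on your own node):

	# List container IDs for a component; empty output means no such container exists yet
	sudo crictl ps -a --quiet --name=kube-apiserver
	# Tail the last 400 lines of one container's logs
	sudo crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b
	# Unit logs for CRI-O and the kubelet
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	# Kernel warnings and errors only
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400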
	I1017 19:27:38.527370  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:38.538426  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:38.538499  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:38.564462  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:38.564484  306747 cri.go:89] found id: ""
	I1017 19:27:38.564504  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:38.564583  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.568393  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:38.568469  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:38.593756  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:38.593785  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:38.593790  306747 cri.go:89] found id: ""
	I1017 19:27:38.593797  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:38.593850  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.597636  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.601069  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:38.601138  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:38.628357  306747 cri.go:89] found id: ""
	I1017 19:27:38.628382  306747 logs.go:282] 0 containers: []
	W1017 19:27:38.628391  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:38.628398  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:38.628455  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:38.653998  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:38.654020  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:38.654025  306747 cri.go:89] found id: ""
	I1017 19:27:38.654033  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:38.654092  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.658000  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.661429  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:38.661500  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:38.687831  306747 cri.go:89] found id: ""
	I1017 19:27:38.687857  306747 logs.go:282] 0 containers: []
	W1017 19:27:38.687866  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:38.687873  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:38.687939  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:38.728871  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:38.728893  306747 cri.go:89] found id: ""
	I1017 19:27:38.728902  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:38.728956  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.732553  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:38.732626  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:38.758108  306747 cri.go:89] found id: ""
	I1017 19:27:38.758131  306747 logs.go:282] 0 containers: []
	W1017 19:27:38.758139  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:38.758149  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:38.758160  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:38.856927  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:38.857005  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:38.875545  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:38.875575  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:38.948879  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:38.941082    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.941735    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.943334    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.943798    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.945334    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:38.941082    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.941735    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.943334    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.943798    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.945334    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:38.948901  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:38.948914  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:38.997335  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:38.997372  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:39.029015  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:39.029043  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:39.108011  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:39.108046  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:39.141940  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:39.141971  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:39.170446  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:39.170472  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:39.208445  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:39.208481  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:39.272902  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:39.272952  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:41.807281  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:41.817677  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:41.817808  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:41.847030  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:41.847052  306747 cri.go:89] found id: ""
	I1017 19:27:41.847060  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:41.847141  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.856702  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:41.856768  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:41.882291  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:41.882314  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:41.882320  306747 cri.go:89] found id: ""
	I1017 19:27:41.882337  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:41.882441  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.886489  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.896574  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:41.896698  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:41.922724  306747 cri.go:89] found id: ""
	I1017 19:27:41.922748  306747 logs.go:282] 0 containers: []
	W1017 19:27:41.922757  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:41.922763  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:41.922817  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:41.948998  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:41.949024  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:41.949030  306747 cri.go:89] found id: ""
	I1017 19:27:41.949038  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:41.949090  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.961165  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.965546  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:41.965617  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:41.994892  306747 cri.go:89] found id: ""
	I1017 19:27:41.994917  306747 logs.go:282] 0 containers: []
	W1017 19:27:41.994935  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:41.994943  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:41.995002  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:42.028588  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:42.028626  306747 cri.go:89] found id: ""
	I1017 19:27:42.028636  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:42.028712  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:42.035671  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:42.035764  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:42.067030  306747 cri.go:89] found id: ""
	I1017 19:27:42.067061  306747 logs.go:282] 0 containers: []
	W1017 19:27:42.067072  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:42.067081  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:42.067105  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:42.109133  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:42.109175  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:42.199861  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:42.199955  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:42.342289  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:42.342335  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:42.363849  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:42.363906  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:42.441824  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:42.432639    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.433836    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.434718    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.436054    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.436745    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:42.432639    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.433836    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.434718    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.436054    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.436745    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:42.441858  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:42.441872  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:42.471376  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:42.471404  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:42.516923  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:42.516960  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:42.595252  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:42.595288  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:42.623727  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:42.623757  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:42.665018  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:42.665048  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:45.203111  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:45.228005  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:45.228167  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:45.284064  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:45.284089  306747 cri.go:89] found id: ""
	I1017 19:27:45.284098  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:45.284165  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.293975  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:45.294167  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:45.366214  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:45.366372  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:45.366394  306747 cri.go:89] found id: ""
	I1017 19:27:45.366421  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:45.366520  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.385006  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.397052  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:45.397258  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:45.444612  306747 cri.go:89] found id: ""
	I1017 19:27:45.444689  306747 logs.go:282] 0 containers: []
	W1017 19:27:45.444712  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:45.444737  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:45.444839  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:45.475398  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:45.475418  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:45.475422  306747 cri.go:89] found id: ""
	I1017 19:27:45.475430  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:45.475483  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.480459  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.484700  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:45.484826  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:45.516264  306747 cri.go:89] found id: ""
	I1017 19:27:45.516289  306747 logs.go:282] 0 containers: []
	W1017 19:27:45.516298  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:45.516305  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:45.516385  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:45.545867  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:45.545891  306747 cri.go:89] found id: ""
	I1017 19:27:45.545900  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:45.545955  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.549781  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:45.549898  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:45.578811  306747 cri.go:89] found id: ""
	I1017 19:27:45.578837  306747 logs.go:282] 0 containers: []
	W1017 19:27:45.578847  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:45.578857  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:45.578870  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:45.605475  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:45.605507  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:45.687039  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:45.687081  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:45.755076  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:45.746538    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.747381    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.749046    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.749635    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.751252    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:45.746538    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.747381    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.749046    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.749635    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.751252    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:45.755099  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:45.755114  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:45.784001  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:45.784034  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:45.837928  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:45.837964  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:45.914633  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:45.914670  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:45.950096  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:45.950123  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:46.054149  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:46.054194  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:46.072594  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:46.072628  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:46.111999  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:46.112030  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:48.642924  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:48.653451  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:48.653519  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:48.679639  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:48.679659  306747 cri.go:89] found id: ""
	I1017 19:27:48.679667  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:48.679720  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.683701  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:48.683775  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:48.711679  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:48.711701  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:48.711707  306747 cri.go:89] found id: ""
	I1017 19:27:48.711714  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:48.711767  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.715462  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.718828  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:48.718914  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:48.745090  306747 cri.go:89] found id: ""
	I1017 19:27:48.745156  306747 logs.go:282] 0 containers: []
	W1017 19:27:48.745170  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:48.745178  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:48.745236  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:48.772250  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:48.772273  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:48.772278  306747 cri.go:89] found id: ""
	I1017 19:27:48.772286  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:48.772344  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.776030  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.779386  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:48.779454  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:48.805859  306747 cri.go:89] found id: ""
	I1017 19:27:48.805884  306747 logs.go:282] 0 containers: []
	W1017 19:27:48.805893  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:48.805900  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:48.805957  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:48.831953  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:48.831975  306747 cri.go:89] found id: ""
	I1017 19:27:48.831984  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:48.832040  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.835702  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:48.835770  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:48.869137  306747 cri.go:89] found id: ""
	I1017 19:27:48.869159  306747 logs.go:282] 0 containers: []
	W1017 19:27:48.869168  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:48.869177  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:48.869190  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:48.910676  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:48.910711  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:48.972655  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:48.972690  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:49.013320  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:49.013350  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:49.093756  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:49.093796  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:49.137959  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:49.137988  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:49.207174  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:49.198952    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.199631    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.201291    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.201757    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.203195    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:49.198952    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.199631    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.201291    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.201757    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.203195    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:49.207199  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:49.207215  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:49.255066  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:49.255135  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:49.283732  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:49.283760  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:49.395846  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:49.395882  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:49.414130  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:49.414161  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:51.941734  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:51.953584  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:51.953657  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:51.984051  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:51.984073  306747 cri.go:89] found id: ""
	I1017 19:27:51.984081  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:51.984225  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:51.989195  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:51.989276  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:52.018264  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:52.018291  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:52.018296  306747 cri.go:89] found id: ""
	I1017 19:27:52.018305  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:52.018390  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.022319  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.026112  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:52.026196  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:52.054070  306747 cri.go:89] found id: ""
	I1017 19:27:52.054097  306747 logs.go:282] 0 containers: []
	W1017 19:27:52.054107  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:52.054114  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:52.054234  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:52.091016  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:52.091040  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:52.091045  306747 cri.go:89] found id: ""
	I1017 19:27:52.091052  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:52.091109  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.095213  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.098982  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:52.099079  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:52.126556  306747 cri.go:89] found id: ""
	I1017 19:27:52.126590  306747 logs.go:282] 0 containers: []
	W1017 19:27:52.126601  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:52.126607  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:52.126676  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:52.158449  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:52.158473  306747 cri.go:89] found id: ""
	I1017 19:27:52.158482  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:52.158543  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.162572  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:52.162647  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:52.192007  306747 cri.go:89] found id: ""
	I1017 19:27:52.192033  306747 logs.go:282] 0 containers: []
	W1017 19:27:52.192042  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:52.192052  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:52.192066  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:52.209934  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:52.209966  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:52.285387  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:52.276095    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.276908    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.278520    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.279497    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.280119    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:52.276095    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.276908    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.278520    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.279497    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.280119    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:52.285410  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:52.285426  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:52.314784  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:52.314812  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:52.349858  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:52.349896  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:52.417120  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:52.417160  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:52.447498  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:52.447525  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:52.525405  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:52.525442  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:52.568336  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:52.568364  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:52.667592  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:52.667629  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:52.714508  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:52.714544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:55.241965  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:55.252843  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:55.252914  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:55.281150  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:55.281173  306747 cri.go:89] found id: ""
	I1017 19:27:55.281181  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:55.281254  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.285436  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:55.285508  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:55.311561  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:55.311585  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:55.311590  306747 cri.go:89] found id: ""
	I1017 19:27:55.311598  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:55.311654  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.315303  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.318720  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:55.318789  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:55.342910  306747 cri.go:89] found id: ""
	I1017 19:27:55.342937  306747 logs.go:282] 0 containers: []
	W1017 19:27:55.342946  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:55.342953  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:55.343012  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:55.369108  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:55.369130  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:55.369136  306747 cri.go:89] found id: ""
	I1017 19:27:55.369154  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:55.369212  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.372980  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.376499  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:55.376598  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:55.409872  306747 cri.go:89] found id: ""
	I1017 19:27:55.409898  306747 logs.go:282] 0 containers: []
	W1017 19:27:55.409907  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:55.409914  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:55.409970  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:55.435703  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:55.435725  306747 cri.go:89] found id: ""
	I1017 19:27:55.435734  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:55.435787  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.439520  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:55.439587  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:55.466991  306747 cri.go:89] found id: ""
	I1017 19:27:55.467017  306747 logs.go:282] 0 containers: []
	W1017 19:27:55.467026  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:55.467036  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:55.467048  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:55.492985  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:55.493014  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:55.566914  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:55.566950  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:55.643727  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:55.635444    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.636184    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.637061    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.638074    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.638650    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:55.635444    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.636184    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.637061    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.638074    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.638650    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:55.643796  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:55.643817  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:55.670365  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:55.670394  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:55.705898  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:55.705936  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:55.732124  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:55.732152  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:55.762958  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:55.762987  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:55.857491  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:55.857528  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:55.875620  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:55.875658  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:55.953454  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:55.953501  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:58.520452  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:58.530935  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:58.531015  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:58.557433  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:58.557455  306747 cri.go:89] found id: ""
	I1017 19:27:58.557464  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:58.557521  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.561276  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:58.561345  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:58.587982  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:58.588006  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:58.588011  306747 cri.go:89] found id: ""
	I1017 19:27:58.588018  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:58.588072  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.591894  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.595410  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:58.595490  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:58.620930  306747 cri.go:89] found id: ""
	I1017 19:27:58.620956  306747 logs.go:282] 0 containers: []
	W1017 19:27:58.620966  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:58.620972  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:58.621038  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:58.646484  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:58.646509  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:58.646514  306747 cri.go:89] found id: ""
	I1017 19:27:58.646522  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:58.646573  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.650281  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.653491  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:58.653564  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:58.679227  306747 cri.go:89] found id: ""
	I1017 19:27:58.679251  306747 logs.go:282] 0 containers: []
	W1017 19:27:58.679261  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:58.679271  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:58.679329  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:58.712878  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:58.712901  306747 cri.go:89] found id: ""
	I1017 19:27:58.712910  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:58.712965  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.717668  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:58.717744  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:58.743926  306747 cri.go:89] found id: ""
	I1017 19:27:58.743950  306747 logs.go:282] 0 containers: []
	W1017 19:27:58.743960  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:58.743969  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:58.743981  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:58.816251  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:58.808176    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.809065    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.810666    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.810959    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.812492    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:58.808176    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.809065    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.810666    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.810959    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.812492    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:58.816275  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:58.816289  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:58.880149  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:58.880187  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:58.926347  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:58.926379  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:58.959298  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:58.959326  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:58.985914  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:58.985941  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:59.060169  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:59.060206  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:59.098174  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:59.098204  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:59.193263  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:59.193298  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:59.223428  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:59.223461  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:59.282679  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:59.282714  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:01.802237  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:01.814388  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:01.814466  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:01.840376  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:01.840398  306747 cri.go:89] found id: ""
	I1017 19:28:01.840412  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:01.840465  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.844426  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:01.844496  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:01.873063  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:01.873085  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:01.873090  306747 cri.go:89] found id: ""
	I1017 19:28:01.873098  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:01.873155  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.877190  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.881085  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:01.881173  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:01.908701  306747 cri.go:89] found id: ""
	I1017 19:28:01.908726  306747 logs.go:282] 0 containers: []
	W1017 19:28:01.908736  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:01.908742  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:01.908799  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:01.936306  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:01.936330  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:01.936335  306747 cri.go:89] found id: ""
	I1017 19:28:01.936343  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:01.936397  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.940768  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.946060  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:01.946131  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:01.974191  306747 cri.go:89] found id: ""
	I1017 19:28:01.974217  306747 logs.go:282] 0 containers: []
	W1017 19:28:01.974227  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:01.974234  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:01.974299  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:02.003021  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:02.003047  306747 cri.go:89] found id: ""
	I1017 19:28:02.003056  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:02.003132  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:02.016728  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:02.016803  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:02.046662  306747 cri.go:89] found id: ""
	I1017 19:28:02.046688  306747 logs.go:282] 0 containers: []
	W1017 19:28:02.046697  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:02.046708  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:02.046744  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:02.076638  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:02.076670  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:02.097353  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:02.097384  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:02.149812  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:02.149852  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:02.212958  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:02.212995  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:02.242664  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:02.242692  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:02.329225  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:02.329262  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:02.364870  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:02.364906  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:02.472339  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:02.472377  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:02.541865  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:02.533392    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.534027    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.535792    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.536454    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.537580    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:02.533392    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.534027    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.535792    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.536454    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.537580    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:02.541887  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:02.541900  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:02.570859  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:02.570888  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:05.110395  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:05.121645  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:05.121716  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:05.153742  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:05.153766  306747 cri.go:89] found id: ""
	I1017 19:28:05.153775  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:05.153829  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.157576  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:05.157647  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:05.184788  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:05.184810  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:05.184815  306747 cri.go:89] found id: ""
	I1017 19:28:05.184823  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:05.184878  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.188586  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.192151  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:05.192222  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:05.222405  306747 cri.go:89] found id: ""
	I1017 19:28:05.222437  306747 logs.go:282] 0 containers: []
	W1017 19:28:05.222447  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:05.222453  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:05.222512  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:05.251383  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:05.251408  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:05.251413  306747 cri.go:89] found id: ""
	I1017 19:28:05.251421  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:05.251474  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.255443  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.258903  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:05.258971  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:05.289906  306747 cri.go:89] found id: ""
	I1017 19:28:05.289983  306747 logs.go:282] 0 containers: []
	W1017 19:28:05.289999  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:05.290007  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:05.290065  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:05.317057  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:05.317122  306747 cri.go:89] found id: ""
	I1017 19:28:05.317136  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:05.317202  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.320997  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:05.321071  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:05.350310  306747 cri.go:89] found id: ""
	I1017 19:28:05.350335  306747 logs.go:282] 0 containers: []
	W1017 19:28:05.350344  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:05.350353  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:05.350364  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:05.387607  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:05.387637  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:05.456949  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:05.448355    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.449098    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.450777    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.451358    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.452970    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:05.448355    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.449098    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.450777    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.451358    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.452970    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:05.457018  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:05.457045  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:05.484064  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:05.484139  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:05.543816  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:05.543851  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:05.573032  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:05.573058  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:05.651816  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:05.651853  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:05.753730  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:05.753765  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:05.772288  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:05.772320  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:05.827946  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:05.827982  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:05.872696  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:05.872731  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:08.406970  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:08.417284  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:08.417352  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:08.443772  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:08.443796  306747 cri.go:89] found id: ""
	I1017 19:28:08.443815  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:08.443868  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.447541  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:08.447633  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:08.472976  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:08.473004  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:08.473009  306747 cri.go:89] found id: ""
	I1017 19:28:08.473017  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:08.473070  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.476664  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.480025  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:08.480095  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:08.507100  306747 cri.go:89] found id: ""
	I1017 19:28:08.507122  306747 logs.go:282] 0 containers: []
	W1017 19:28:08.507130  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:08.507136  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:08.507194  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:08.532864  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:08.532888  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:08.532895  306747 cri.go:89] found id: ""
	I1017 19:28:08.532912  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:08.532966  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.536602  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.540037  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:08.540108  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:08.566233  306747 cri.go:89] found id: ""
	I1017 19:28:08.566258  306747 logs.go:282] 0 containers: []
	W1017 19:28:08.566267  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:08.566273  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:08.566348  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:08.593545  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:08.593568  306747 cri.go:89] found id: ""
	I1017 19:28:08.593577  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:08.593630  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.597170  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:08.597251  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:08.622805  306747 cri.go:89] found id: ""
	I1017 19:28:08.622829  306747 logs.go:282] 0 containers: []
	W1017 19:28:08.622838  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:08.622847  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:08.622886  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:08.718117  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:08.718158  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:08.736317  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:08.736358  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:08.785165  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:08.785200  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:08.813123  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:08.813154  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:08.842670  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:08.842698  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:08.883049  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:08.883081  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:08.948658  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:08.940826    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.941602    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.943150    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.943452    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.944921    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:08.940826    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.941602    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.943150    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.943452    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.944921    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:08.948680  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:08.948693  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:08.975235  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:08.975261  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:09.023572  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:09.023607  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:09.085674  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:09.085713  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:11.674341  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:11.684867  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:11.684937  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:11.710235  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:11.710258  306747 cri.go:89] found id: ""
	I1017 19:28:11.710266  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:11.710317  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.713823  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:11.713893  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:11.743536  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:11.743557  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:11.743564  306747 cri.go:89] found id: ""
	I1017 19:28:11.743571  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:11.743623  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.747225  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.750360  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:11.750423  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:11.775489  306747 cri.go:89] found id: ""
	I1017 19:28:11.775553  306747 logs.go:282] 0 containers: []
	W1017 19:28:11.775575  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:11.775599  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:11.775689  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:11.804973  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:11.804993  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:11.804999  306747 cri.go:89] found id: ""
	I1017 19:28:11.805007  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:11.805064  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.809085  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.812425  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:11.812493  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:11.839019  306747 cri.go:89] found id: ""
	I1017 19:28:11.839042  306747 logs.go:282] 0 containers: []
	W1017 19:28:11.839051  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:11.839057  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:11.839113  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:11.867946  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:11.868012  306747 cri.go:89] found id: ""
	I1017 19:28:11.868036  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:11.868125  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.871735  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:11.871847  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:11.917369  306747 cri.go:89] found id: ""
	I1017 19:28:11.917435  306747 logs.go:282] 0 containers: []
	W1017 19:28:11.917448  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:11.917458  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:11.917473  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:12.015837  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:12.015876  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:12.037612  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:12.037645  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:12.066665  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:12.066695  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:12.124283  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:12.124321  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:12.157456  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:12.157487  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:12.218566  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:12.218603  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:12.246576  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:12.246601  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:12.323228  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:12.323263  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:12.389358  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:12.381335    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.382085    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.383576    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.384016    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.385432    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:12.381335    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.382085    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.383576    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.384016    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.385432    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:12.389381  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:12.389394  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:12.420218  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:12.420248  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
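The cycle above shows the pattern this log-collection path follows on the node: container IDs for each control-plane component are discovered with "crictl ps -a --quiet --name=<component>", each discovered ID is tailed with "crictl logs --tail 400 <id>", and kubelet/CRI-O output comes from journalctl. A minimal standalone Go sketch of that discover-then-tail pattern follows; it is not minikube's implementation, and it assumes sudo and crictl are available on the node:

	// logdump.go - a sketch of the crictl discover-then-tail pattern seen in the cycle above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors: sudo crictl ps -a --quiet --name=<name>
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
			ids, err := containerIDs(name)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no %q containers found (err: %v)\n", name, err)
				continue
			}
			for _, id := range ids {
				// Mirrors: sudo crictl logs --tail 400 <id>
				logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
			}
		}
	}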
	I1017 19:28:14.967518  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:14.978398  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:14.978489  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:15.008833  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:15.008861  306747 cri.go:89] found id: ""
	I1017 19:28:15.008869  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:15.008962  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.019024  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:15.019115  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:15.048619  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:15.048641  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:15.048646  306747 cri.go:89] found id: ""
	I1017 19:28:15.048653  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:15.048711  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.052829  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.056849  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:15.056960  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:15.090614  306747 cri.go:89] found id: ""
	I1017 19:28:15.090646  306747 logs.go:282] 0 containers: []
	W1017 19:28:15.090670  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:15.090679  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:15.090755  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:15.121287  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:15.121354  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:15.121367  306747 cri.go:89] found id: ""
	I1017 19:28:15.121376  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:15.121441  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.126749  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.130705  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:15.130786  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:15.158437  306747 cri.go:89] found id: ""
	I1017 19:28:15.158462  306747 logs.go:282] 0 containers: []
	W1017 19:28:15.158472  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:15.158479  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:15.158542  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:15.187795  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:15.187819  306747 cri.go:89] found id: ""
	I1017 19:28:15.187828  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:15.187885  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.191939  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:15.192014  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:15.221830  306747 cri.go:89] found id: ""
	I1017 19:28:15.221856  306747 logs.go:282] 0 containers: []
	W1017 19:28:15.221866  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:15.221875  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:15.221886  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:15.314949  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:15.314983  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:15.334443  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:15.334524  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:15.391124  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:15.391159  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:15.464757  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:15.464794  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:15.499089  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:15.499118  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:15.572721  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:15.572758  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:15.604780  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:15.604809  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:15.673978  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:15.665870    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.666574    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.668276    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.668888    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.670272    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:15.665870    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.666574    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.668276    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.668888    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.670272    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:15.674001  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:15.674014  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:15.703550  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:15.703577  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:15.736137  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:15.736167  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:18.272459  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:18.284130  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:18.284202  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:18.317045  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:18.317114  306747 cri.go:89] found id: ""
	I1017 19:28:18.317140  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:18.317200  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.320946  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:18.321021  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:18.349966  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:18.350047  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:18.350069  306747 cri.go:89] found id: ""
	I1017 19:28:18.350078  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:18.350146  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.354094  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.357736  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:18.357840  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:18.389890  306747 cri.go:89] found id: ""
	I1017 19:28:18.389914  306747 logs.go:282] 0 containers: []
	W1017 19:28:18.389923  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:18.389929  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:18.389990  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:18.416552  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:18.416573  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:18.416577  306747 cri.go:89] found id: ""
	I1017 19:28:18.416584  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:18.416636  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.421408  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.425021  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:18.425127  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:18.451716  306747 cri.go:89] found id: ""
	I1017 19:28:18.451744  306747 logs.go:282] 0 containers: []
	W1017 19:28:18.451754  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:18.451760  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:18.451824  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:18.486286  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:18.486355  306747 cri.go:89] found id: ""
	I1017 19:28:18.486370  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:18.486424  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.490097  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:18.490214  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:18.517834  306747 cri.go:89] found id: ""
	I1017 19:28:18.517859  306747 logs.go:282] 0 containers: []
	W1017 19:28:18.517868  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:18.517877  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:18.517907  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:18.569373  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:18.569412  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:18.597414  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:18.597442  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:18.615623  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:18.615651  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:18.687384  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:18.679364    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.680188    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.681715    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.682200    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.683729    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:18.679364    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.680188    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.681715    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.682200    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.683729    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:18.687406  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:18.687420  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:18.724107  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:18.724135  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:18.757798  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:18.757832  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:18.823518  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:18.823556  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:18.868332  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:18.868358  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:18.948355  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:18.948391  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:18.980022  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:18.980052  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
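Every "describe nodes" attempt in this window fails the same way: the embedded kubectl cannot reach https://localhost:8443 and gets connection refused, meaning nothing is listening on the apiserver port yet. A quick way to confirm that state independently of kubectl is sketched below in Go (localhost port 8443 is assumed from the errors above; this is not part of the test harness):

	// apicheck.go - distinguishes "nothing listening" from "listening but unhealthy".
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Matches the state in the log: dial tcp [::1]:8443: connect: connection refused.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443; check apiserver health instead")
	}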
	I1017 19:28:21.580647  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:21.591760  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:21.591828  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:21.619734  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:21.619755  306747 cri.go:89] found id: ""
	I1017 19:28:21.619763  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:21.619822  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.623634  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:21.623706  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:21.650174  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:21.650202  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:21.650207  306747 cri.go:89] found id: ""
	I1017 19:28:21.650215  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:21.650275  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.654337  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.658320  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:21.658390  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:21.685562  306747 cri.go:89] found id: ""
	I1017 19:28:21.685587  306747 logs.go:282] 0 containers: []
	W1017 19:28:21.685596  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:21.685602  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:21.685696  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:21.711151  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:21.711175  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:21.711180  306747 cri.go:89] found id: ""
	I1017 19:28:21.711188  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:21.711241  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.714981  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.718517  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:21.718587  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:21.745770  306747 cri.go:89] found id: ""
	I1017 19:28:21.745796  306747 logs.go:282] 0 containers: []
	W1017 19:28:21.745805  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:21.745812  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:21.745872  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:21.773020  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:21.773042  306747 cri.go:89] found id: ""
	I1017 19:28:21.773052  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:21.773107  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.776980  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:21.777073  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:21.805110  306747 cri.go:89] found id: ""
	I1017 19:28:21.805137  306747 logs.go:282] 0 containers: []
	W1017 19:28:21.805146  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:21.805156  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:21.805187  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:21.915295  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:21.915339  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:21.934521  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:21.934553  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:21.971829  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:21.971867  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:22.032460  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:22.032500  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:22.069813  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:22.069901  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:22.150515  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:22.150553  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:22.186817  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:22.186843  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:22.250982  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:22.242783    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.243418    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.244975    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.245572    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.247184    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:22.242783    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.243418    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.244975    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.245572    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.247184    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:22.251005  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:22.251019  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:22.318367  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:22.318403  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:22.359962  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:22.359991  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:24.888496  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:24.899632  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:24.899701  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:24.927106  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:24.927126  306747 cri.go:89] found id: ""
	I1017 19:28:24.927135  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:24.927191  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:24.930789  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:24.930901  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:24.957962  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:24.957986  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:24.957992  306747 cri.go:89] found id: ""
	I1017 19:28:24.958000  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:24.958052  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:24.961689  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:24.965312  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:24.965388  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:24.999567  306747 cri.go:89] found id: ""
	I1017 19:28:24.999646  306747 logs.go:282] 0 containers: []
	W1017 19:28:24.999670  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:24.999692  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:24.999784  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:25.030377  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:25.030447  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:25.030466  306747 cri.go:89] found id: ""
	I1017 19:28:25.030493  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:25.030587  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:25.034492  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:25.038213  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:25.038307  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:25.064926  306747 cri.go:89] found id: ""
	I1017 19:28:25.065005  306747 logs.go:282] 0 containers: []
	W1017 19:28:25.065022  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:25.065029  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:25.065092  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:25.104761  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:25.104835  306747 cri.go:89] found id: ""
	I1017 19:28:25.104851  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:25.104908  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:25.109062  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:25.109153  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:25.137891  306747 cri.go:89] found id: ""
	I1017 19:28:25.137923  306747 logs.go:282] 0 containers: []
	W1017 19:28:25.137931  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:25.137940  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:25.137953  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:25.170975  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:25.171007  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:25.204002  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:25.204031  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:25.297840  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:25.297914  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:25.315642  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:25.315682  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:25.369974  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:25.370011  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:25.452713  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:25.452749  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:25.483409  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:25.483439  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:25.558385  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:25.550412    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.551034    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.552731    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.553294    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.554883    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:25.550412    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.551034    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.552731    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.553294    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.554883    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:25.558408  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:25.558421  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:25.585961  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:25.585989  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:25.617689  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:25.617720  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:28.181797  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:28.193078  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:28.193193  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:28.220858  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:28.220880  306747 cri.go:89] found id: ""
	I1017 19:28:28.220889  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:28.220949  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.224889  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:28.224962  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:28.256761  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:28.256782  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:28.256787  306747 cri.go:89] found id: ""
	I1017 19:28:28.256795  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:28.256849  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.261049  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.264952  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:28.265076  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:28.291441  306747 cri.go:89] found id: ""
	I1017 19:28:28.291509  306747 logs.go:282] 0 containers: []
	W1017 19:28:28.291533  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:28.291556  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:28.291641  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:28.318704  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:28.318768  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:28.318790  306747 cri.go:89] found id: ""
	I1017 19:28:28.318815  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:28.318904  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.323349  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.327034  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:28.327096  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:28.357958  306747 cri.go:89] found id: ""
	I1017 19:28:28.357983  306747 logs.go:282] 0 containers: []
	W1017 19:28:28.357992  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:28.358001  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:28.358059  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:28.384163  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:28.384187  306747 cri.go:89] found id: ""
	I1017 19:28:28.384196  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:28.384262  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.387976  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:28.388088  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:28.414600  306747 cri.go:89] found id: ""
	I1017 19:28:28.414625  306747 logs.go:282] 0 containers: []
	W1017 19:28:28.414635  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:28.414644  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:28.414655  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:28.478712  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:28.469484    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.470334    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.472333    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.473060    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.474868    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:28.469484    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.470334    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.472333    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.473060    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.474868    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:28.478736  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:28.478749  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:28.504392  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:28.504432  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:28.566111  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:28.566147  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:28.597513  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:28.597544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:28.676314  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:28.676352  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:28.779140  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:28.779181  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:28.830823  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:28.830858  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:28.873192  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:28.873224  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:28.907594  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:28.907621  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:28.939159  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:28.939188  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:31.457173  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:31.468390  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:31.468462  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:31.500159  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:31.500183  306747 cri.go:89] found id: ""
	I1017 19:28:31.500191  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:31.500245  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.503981  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:31.504051  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:31.529707  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:31.529735  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:31.529740  306747 cri.go:89] found id: ""
	I1017 19:28:31.529748  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:31.529810  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.533478  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.536973  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:31.537042  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:31.562894  306747 cri.go:89] found id: ""
	I1017 19:28:31.562920  306747 logs.go:282] 0 containers: []
	W1017 19:28:31.562929  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:31.562936  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:31.562996  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:31.591920  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:31.591943  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:31.591949  306747 cri.go:89] found id: ""
	I1017 19:28:31.591956  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:31.592011  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.595596  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.598999  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:31.599093  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:31.631142  306747 cri.go:89] found id: ""
	I1017 19:28:31.631164  306747 logs.go:282] 0 containers: []
	W1017 19:28:31.631173  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:31.631179  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:31.631264  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:31.657995  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:31.658017  306747 cri.go:89] found id: ""
	I1017 19:28:31.658026  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:31.658077  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.661797  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:31.661866  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:31.687995  306747 cri.go:89] found id: ""
	I1017 19:28:31.688019  306747 logs.go:282] 0 containers: []
	W1017 19:28:31.688028  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:31.688037  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:31.688049  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:31.714258  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:31.714288  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:31.743480  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:31.743510  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:31.839126  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:31.839165  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:31.865944  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:31.865971  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:31.923800  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:31.923834  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:32.015198  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:32.015258  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:32.108618  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:32.108656  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:32.127026  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:32.127056  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:32.197465  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:32.189288    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.190038    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.191643    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.191956    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.193464    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:32.189288    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.190038    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.191643    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.191956    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.193464    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:32.197487  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:32.197501  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:32.230297  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:32.230333  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:34.763313  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:34.773938  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:34.774008  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:34.801473  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:34.801491  306747 cri.go:89] found id: ""
	I1017 19:28:34.801498  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:34.801568  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.805380  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:34.805451  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:34.831939  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:34.831964  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:34.831968  306747 cri.go:89] found id: ""
	I1017 19:28:34.831976  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:34.832034  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.836223  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.839881  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:34.839985  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:34.867700  306747 cri.go:89] found id: ""
	I1017 19:28:34.867725  306747 logs.go:282] 0 containers: []
	W1017 19:28:34.867735  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:34.867741  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:34.867826  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:34.898720  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:34.898743  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:34.898748  306747 cri.go:89] found id: ""
	I1017 19:28:34.898756  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:34.898827  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.902459  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.905896  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:34.905974  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:34.933166  306747 cri.go:89] found id: ""
	I1017 19:28:34.933242  306747 logs.go:282] 0 containers: []
	W1017 19:28:34.933258  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:34.933266  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:34.933326  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:34.961978  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:34.962067  306747 cri.go:89] found id: ""
	I1017 19:28:34.962091  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:34.962173  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.966069  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:34.966147  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:34.993526  306747 cri.go:89] found id: ""
	I1017 19:28:34.993565  306747 logs.go:282] 0 containers: []
	W1017 19:28:34.993574  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:34.993583  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:34.993594  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:35.023086  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:35.023173  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:35.057614  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:35.057652  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:35.126909  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:35.126944  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:35.207646  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:35.207681  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:35.240791  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:35.240824  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:35.259253  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:35.259285  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:35.327544  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:35.319793    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.320443    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.321977    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.322405    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.323890    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:35.319793    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.320443    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.321977    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.322405    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.323890    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:35.327566  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:35.327579  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:35.377112  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:35.377150  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:35.405892  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:35.405920  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:35.431201  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:35.431230  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:38.030766  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:38.042946  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:38.043015  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:38.074181  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:38.074215  306747 cri.go:89] found id: ""
	I1017 19:28:38.074224  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:38.074287  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.079011  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:38.079083  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:38.108493  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:38.108592  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:38.108612  306747 cri.go:89] found id: ""
	I1017 19:28:38.108636  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:38.108721  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.112489  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.115918  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:38.116030  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:38.146192  306747 cri.go:89] found id: ""
	I1017 19:28:38.146215  306747 logs.go:282] 0 containers: []
	W1017 19:28:38.146225  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:38.146233  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:38.146315  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:38.178299  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:38.178363  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:38.178375  306747 cri.go:89] found id: ""
	I1017 19:28:38.178382  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:38.178438  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.182144  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.185723  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:38.185785  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:38.210486  306747 cri.go:89] found id: ""
	I1017 19:28:38.210509  306747 logs.go:282] 0 containers: []
	W1017 19:28:38.210518  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:38.210524  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:38.210578  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:38.240550  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:38.240573  306747 cri.go:89] found id: ""
	I1017 19:28:38.240581  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:38.240633  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.246616  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:38.246710  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:38.272684  306747 cri.go:89] found id: ""
	I1017 19:28:38.272710  306747 logs.go:282] 0 containers: []
	W1017 19:28:38.272719  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:38.272728  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:38.272759  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:38.291309  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:38.291338  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:38.362093  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:38.354481    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.355177    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.356720    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.357017    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.358292    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:38.354481    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.355177    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.356720    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.357017    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.358292    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:38.362115  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:38.362136  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:38.388487  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:38.388541  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:38.460507  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:38.460545  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:38.493438  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:38.493472  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:38.519348  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:38.519378  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:38.547771  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:38.547800  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:38.646739  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:38.646779  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:38.711727  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:38.711765  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:38.794605  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:38.794645  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:41.329100  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:41.340102  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:41.340191  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:41.378237  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:41.378304  306747 cri.go:89] found id: ""
	I1017 19:28:41.378327  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:41.378411  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.382295  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:41.382433  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:41.413432  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:41.413454  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:41.413459  306747 cri.go:89] found id: ""
	I1017 19:28:41.413483  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:41.413541  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.417349  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.420940  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:41.421030  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:41.447730  306747 cri.go:89] found id: ""
	I1017 19:28:41.447754  306747 logs.go:282] 0 containers: []
	W1017 19:28:41.447763  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:41.447769  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:41.447917  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:41.473491  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:41.473514  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:41.473520  306747 cri.go:89] found id: ""
	I1017 19:28:41.473527  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:41.473602  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.477615  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.481139  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:41.481211  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:41.507258  306747 cri.go:89] found id: ""
	I1017 19:28:41.507283  306747 logs.go:282] 0 containers: []
	W1017 19:28:41.507292  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:41.507300  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:41.507356  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:41.537051  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:41.537073  306747 cri.go:89] found id: ""
	I1017 19:28:41.537082  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:41.537134  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.540852  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:41.540920  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:41.567361  306747 cri.go:89] found id: ""
	I1017 19:28:41.567389  306747 logs.go:282] 0 containers: []
	W1017 19:28:41.567398  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:41.567407  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:41.567419  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:41.599142  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:41.599172  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:41.635743  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:41.635773  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:41.654302  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:41.654331  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:41.717143  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:41.717179  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:41.792345  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:41.792380  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:41.871479  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:41.871517  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:41.975433  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:41.975512  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:42.054059  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:42.044191    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.045351    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.046050    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.047965    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.048651    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:42.044191    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.045351    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.046050    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.047965    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.048651    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:42.054083  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:42.054106  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:42.089914  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:42.089944  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:42.149148  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:42.149200  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:44.709425  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:44.719908  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:44.719977  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:44.763510  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:44.763534  306747 cri.go:89] found id: ""
	I1017 19:28:44.763541  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:44.763594  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.767241  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:44.767313  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:44.795651  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:44.795675  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:44.795681  306747 cri.go:89] found id: ""
	I1017 19:28:44.795689  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:44.795742  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.800272  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.804452  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:44.804565  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:44.839339  306747 cri.go:89] found id: ""
	I1017 19:28:44.839371  306747 logs.go:282] 0 containers: []
	W1017 19:28:44.839379  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:44.839386  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:44.839452  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:44.875066  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:44.875099  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:44.875105  306747 cri.go:89] found id: ""
	I1017 19:28:44.875139  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:44.875214  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.880309  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.883914  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:44.884020  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:44.917517  306747 cri.go:89] found id: ""
	I1017 19:28:44.917586  306747 logs.go:282] 0 containers: []
	W1017 19:28:44.917614  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:44.917638  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:44.917727  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:44.946317  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:44.946393  306747 cri.go:89] found id: ""
	I1017 19:28:44.946416  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:44.946496  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.950194  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:44.950311  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:44.976935  306747 cri.go:89] found id: ""
	I1017 19:28:44.977000  306747 logs.go:282] 0 containers: []
	W1017 19:28:44.977027  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:44.977054  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:44.977071  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:45.083362  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:45.083465  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:45.185240  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:45.174155    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.175051    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.176949    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.178114    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.178917    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:45.174155    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.175051    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.176949    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.178114    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.178917    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:45.185281  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:45.185298  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:45.229219  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:45.229247  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:45.303101  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:45.303141  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:45.395057  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:45.395208  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:45.422882  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:45.422938  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:45.465002  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:45.465035  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:45.501568  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:45.501600  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:45.530952  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:45.530983  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:45.610519  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:45.610560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:48.146542  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:48.158014  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:48.158095  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:48.185610  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:48.185676  306747 cri.go:89] found id: ""
	I1017 19:28:48.185699  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:48.185773  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.189874  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:48.189975  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:48.216931  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:48.216997  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:48.217020  306747 cri.go:89] found id: ""
	I1017 19:28:48.217044  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:48.217112  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.220961  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.224622  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:48.224715  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:48.254633  306747 cri.go:89] found id: ""
	I1017 19:28:48.254660  306747 logs.go:282] 0 containers: []
	W1017 19:28:48.254669  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:48.254676  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:48.254759  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:48.280918  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:48.280996  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:48.281017  306747 cri.go:89] found id: ""
	I1017 19:28:48.281033  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:48.281101  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.285444  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.289246  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:48.289369  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:48.317150  306747 cri.go:89] found id: ""
	I1017 19:28:48.317216  306747 logs.go:282] 0 containers: []
	W1017 19:28:48.317244  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:48.317275  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:48.317350  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:48.347609  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:48.347643  306747 cri.go:89] found id: ""
	I1017 19:28:48.347652  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:48.347704  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.351509  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:48.351584  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:48.376680  306747 cri.go:89] found id: ""
	I1017 19:28:48.376708  306747 logs.go:282] 0 containers: []
	W1017 19:28:48.376716  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:48.376726  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:48.376738  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:48.452752  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:48.452788  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:48.484352  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:48.484382  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:48.510315  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:48.510344  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:48.571544  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:48.571578  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:48.609922  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:48.609951  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:48.642129  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:48.642158  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:48.737103  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:48.737139  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:48.755251  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:48.755324  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:48.826596  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:48.817740    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.818885    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.819683    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.820717    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.821339    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:48.817740    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.818885    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.819683    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.820717    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.821339    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:48.826621  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:48.826676  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:48.917412  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:48.917447  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:51.447884  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:51.458905  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:51.458975  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:51.486341  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:51.486364  306747 cri.go:89] found id: ""
	I1017 19:28:51.486373  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:51.486435  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.490132  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:51.490214  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:51.515926  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:51.515950  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:51.515956  306747 cri.go:89] found id: ""
	I1017 19:28:51.515964  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:51.516033  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.520421  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.524078  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:51.524150  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:51.558659  306747 cri.go:89] found id: ""
	I1017 19:28:51.558683  306747 logs.go:282] 0 containers: []
	W1017 19:28:51.558693  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:51.558700  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:51.558754  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:51.584326  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:51.584349  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:51.584355  306747 cri.go:89] found id: ""
	I1017 19:28:51.584362  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:51.584417  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.588059  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.591616  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:51.591692  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:51.621537  306747 cri.go:89] found id: ""
	I1017 19:28:51.621562  306747 logs.go:282] 0 containers: []
	W1017 19:28:51.621571  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:51.621577  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:51.621634  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:51.648966  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:51.648994  306747 cri.go:89] found id: ""
	I1017 19:28:51.649002  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:51.649064  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.652867  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:51.652934  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:51.685921  306747 cri.go:89] found id: ""
	I1017 19:28:51.685944  306747 logs.go:282] 0 containers: []
	W1017 19:28:51.685953  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:51.685962  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:51.685973  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:51.759988  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:51.760023  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:51.846069  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:51.835717    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.836264    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.837776    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.840665    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.841647    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:51.835717    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.836264    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.837776    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.840665    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.841647    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:51.846090  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:51.846105  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:51.875253  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:51.875281  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:51.929449  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:51.929478  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:52.036309  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:52.036348  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:52.054743  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:52.054772  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:52.088833  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:52.088860  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:52.157298  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:52.157332  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:52.199361  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:52.199392  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:52.268239  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:52.268286  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:54.799369  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:54.809961  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:54.810031  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:54.836137  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:54.836157  306747 cri.go:89] found id: ""
	I1017 19:28:54.836167  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:54.836220  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.839841  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:54.839912  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:54.873358  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:54.873379  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:54.873383  306747 cri.go:89] found id: ""
	I1017 19:28:54.873391  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:54.873445  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.877284  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.881090  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:54.881164  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:54.908431  306747 cri.go:89] found id: ""
	I1017 19:28:54.908456  306747 logs.go:282] 0 containers: []
	W1017 19:28:54.908465  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:54.908471  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:54.908607  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:54.935825  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:54.935845  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:54.935850  306747 cri.go:89] found id: ""
	I1017 19:28:54.935857  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:54.935913  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.939621  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.943502  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:54.943577  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:54.973718  306747 cri.go:89] found id: ""
	I1017 19:28:54.973742  306747 logs.go:282] 0 containers: []
	W1017 19:28:54.973751  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:54.973757  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:54.973818  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:55.004781  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:55.004802  306747 cri.go:89] found id: ""
	I1017 19:28:55.004818  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:55.004885  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:55.015050  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:55.015136  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:55.043899  306747 cri.go:89] found id: ""
	I1017 19:28:55.043966  306747 logs.go:282] 0 containers: []
	W1017 19:28:55.043988  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:55.044013  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:55.044056  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:55.097224  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:55.097263  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:55.126143  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:55.126175  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:55.170272  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:55.170302  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:55.190816  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:55.190846  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:55.229778  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:55.229815  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:55.296882  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:55.296954  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:55.322920  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:55.322960  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:55.398513  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:55.398549  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:55.499678  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:55.499714  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:55.563984  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:55.555178    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.556013    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.557806    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.558580    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.560270    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:55.555178    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.556013    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.557806    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.558580    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.560270    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:55.564010  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:55.564024  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:58.090313  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:58.101520  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:58.101590  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:58.135133  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:58.135155  306747 cri.go:89] found id: ""
	I1017 19:28:58.135165  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:58.135217  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.139309  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:58.139381  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:58.166722  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:58.166743  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:58.166749  306747 cri.go:89] found id: ""
	I1017 19:28:58.166757  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:58.166829  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.170644  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.174541  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:58.174614  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:58.200707  306747 cri.go:89] found id: ""
	I1017 19:28:58.200733  306747 logs.go:282] 0 containers: []
	W1017 19:28:58.200741  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:58.200748  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:58.200802  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:58.227069  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:58.227090  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:58.227095  306747 cri.go:89] found id: ""
	I1017 19:28:58.227102  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:58.227153  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.230793  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.234187  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:58.234268  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:58.260228  306747 cri.go:89] found id: ""
	I1017 19:28:58.260255  306747 logs.go:282] 0 containers: []
	W1017 19:28:58.260264  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:58.260271  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:58.260330  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:58.287560  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:58.287582  306747 cri.go:89] found id: ""
	I1017 19:28:58.287590  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:58.287642  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.291431  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:58.291498  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:58.319091  306747 cri.go:89] found id: ""
	I1017 19:28:58.319116  306747 logs.go:282] 0 containers: []
	W1017 19:28:58.319125  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:58.319133  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:58.319144  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:58.357128  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:58.357156  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:58.457940  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:58.457987  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:58.477285  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:58.477363  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:58.553846  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:58.545334    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.546110    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.547791    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.548153    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.549602    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:58.545334    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.546110    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.547791    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.548153    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.549602    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:58.553942  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:58.553987  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:58.588733  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:58.588806  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:58.615167  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:58.615234  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:58.668448  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:58.668480  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:58.701507  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:58.701539  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:58.772475  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:58.772512  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:58.800891  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:58.800921  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:01.380664  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:01.397862  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:01.397929  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:01.438317  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:01.438341  306747 cri.go:89] found id: ""
	I1017 19:29:01.438349  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:01.438408  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.448585  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:01.448665  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:01.480947  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:01.480971  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:01.480978  306747 cri.go:89] found id: ""
	I1017 19:29:01.480985  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:01.481040  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.488101  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.493426  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:01.493541  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:01.529725  306747 cri.go:89] found id: ""
	I1017 19:29:01.529759  306747 logs.go:282] 0 containers: []
	W1017 19:29:01.529767  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:01.529803  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:01.529888  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:01.570078  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:01.570130  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:01.570162  306747 cri.go:89] found id: ""
	I1017 19:29:01.570347  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:01.570572  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.580262  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.584761  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:01.584865  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:01.619278  306747 cri.go:89] found id: ""
	I1017 19:29:01.619316  306747 logs.go:282] 0 containers: []
	W1017 19:29:01.619326  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:01.619460  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:01.619709  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:01.668374  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:01.668398  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:01.668404  306747 cri.go:89] found id: ""
	I1017 19:29:01.668411  306747 logs.go:282] 2 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:29:01.668500  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.672629  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.676472  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:01.676559  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:01.718877  306747 cri.go:89] found id: ""
	I1017 19:29:01.718901  306747 logs.go:282] 0 containers: []
	W1017 19:29:01.718911  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:01.718979  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:01.719003  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:01.786370  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:01.786448  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:01.835925  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:01.836009  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:01.936969  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:01.937000  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:01.985828  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:01.985857  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:02.036057  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:02.036090  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:02.088571  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:02.088600  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:02.183054  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:02.174539    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.175524    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.177270    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.177576    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.179060    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:02.174539    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.175524    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.177270    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.177576    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.179060    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:02.183078  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:02.183094  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:02.214988  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:29:02.215019  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:02.246207  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:02.246238  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:02.338642  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:02.338682  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:02.473356  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:02.473435  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:04.994292  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:05.005817  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:05.005900  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:05.038175  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:05.038208  306747 cri.go:89] found id: ""
	I1017 19:29:05.038217  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:05.038276  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.042122  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:05.042193  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:05.072245  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:05.072271  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:05.072277  306747 cri.go:89] found id: ""
	I1017 19:29:05.072290  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:05.072369  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.085415  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.089790  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:05.089901  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:05.126026  306747 cri.go:89] found id: ""
	I1017 19:29:05.126051  306747 logs.go:282] 0 containers: []
	W1017 19:29:05.126059  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:05.126065  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:05.126129  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:05.157653  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:05.157689  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:05.157694  306747 cri.go:89] found id: ""
	I1017 19:29:05.157708  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:05.157780  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.162134  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.166047  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:05.166134  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:05.201222  306747 cri.go:89] found id: ""
	I1017 19:29:05.201247  306747 logs.go:282] 0 containers: []
	W1017 19:29:05.201266  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:05.201291  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:05.201364  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:05.228323  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:05.228343  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:05.228348  306747 cri.go:89] found id: ""
	I1017 19:29:05.228355  306747 logs.go:282] 2 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:29:05.228413  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.232758  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.236321  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:05.236407  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:05.264094  306747 cri.go:89] found id: ""
	I1017 19:29:05.264119  306747 logs.go:282] 0 containers: []
	W1017 19:29:05.264128  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:05.264137  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:05.264150  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:05.289719  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:05.289749  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:05.341596  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:05.341632  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:05.385650  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:05.385681  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:05.455993  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:05.456032  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:05.482902  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:05.482967  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:05.561357  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:05.561393  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:05.662914  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:05.662948  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:05.681986  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:05.682019  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:05.709932  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:29:05.709959  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:05.745521  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:05.745548  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:05.780007  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:05.780039  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:05.861169  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:05.844357    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.845194    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.846708    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.847144    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.849138    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:05.844357    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.845194    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.846708    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.847144    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.849138    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:08.361828  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:08.372509  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:08.372609  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:08.398614  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:08.398638  306747 cri.go:89] found id: ""
	I1017 19:29:08.398646  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:08.398707  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.402221  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:08.402294  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:08.426256  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:08.426278  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:08.426284  306747 cri.go:89] found id: ""
	I1017 19:29:08.426291  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:08.426341  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.429916  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.433518  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:08.433587  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:08.460461  306747 cri.go:89] found id: ""
	I1017 19:29:08.460487  306747 logs.go:282] 0 containers: []
	W1017 19:29:08.460495  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:08.460502  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:08.460591  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:08.488509  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:08.488562  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:08.488568  306747 cri.go:89] found id: ""
	I1017 19:29:08.488576  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:08.488628  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.492158  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.495581  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:08.495647  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:08.524899  306747 cri.go:89] found id: ""
	I1017 19:29:08.524920  306747 logs.go:282] 0 containers: []
	W1017 19:29:08.524928  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:08.524934  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:08.524997  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:08.552958  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:08.552979  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:08.552984  306747 cri.go:89] found id: ""
	I1017 19:29:08.552991  306747 logs.go:282] 2 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:29:08.553045  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.557091  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.560618  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:08.560683  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:08.587418  306747 cri.go:89] found id: ""
	I1017 19:29:08.587495  306747 logs.go:282] 0 containers: []
	W1017 19:29:08.587517  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:08.587557  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:29:08.587586  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:08.617740  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:08.617768  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:08.691709  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:08.691747  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:08.710175  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:08.710209  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:08.777270  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:08.777305  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:08.810729  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:08.810754  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:08.861497  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:08.861524  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:08.964232  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:08.964270  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:09.042894  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:09.034262    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.034773    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.036444    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.037159    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.038877    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:09.034262    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.034773    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.036444    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.037159    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.038877    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:09.042916  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:09.042941  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:09.067822  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:09.067849  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:09.107723  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:09.107755  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:09.186115  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:09.186151  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:11.716134  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:11.726531  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:11.726597  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:11.752711  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:11.752733  306747 cri.go:89] found id: ""
	I1017 19:29:11.752741  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:11.752795  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.756278  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:11.756366  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:11.786396  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:11.786424  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:11.786430  306747 cri.go:89] found id: ""
	I1017 19:29:11.786439  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:11.786523  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.790327  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.794284  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:11.794350  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:11.826413  306747 cri.go:89] found id: ""
	I1017 19:29:11.826437  306747 logs.go:282] 0 containers: []
	W1017 19:29:11.826446  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:11.826452  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:11.826507  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:11.861782  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:11.861855  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:11.861875  306747 cri.go:89] found id: ""
	I1017 19:29:11.861900  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:11.861986  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.866376  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.870040  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:11.870106  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:11.902703  306747 cri.go:89] found id: ""
	I1017 19:29:11.902725  306747 logs.go:282] 0 containers: []
	W1017 19:29:11.902739  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:11.902745  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:11.902803  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:11.932072  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:11.932141  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:11.932161  306747 cri.go:89] found id: ""
	I1017 19:29:11.932186  306747 logs.go:282] 2 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:29:11.932273  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.935981  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.939489  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:11.939560  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:11.975511  306747 cri.go:89] found id: ""
	I1017 19:29:11.975535  306747 logs.go:282] 0 containers: []
	W1017 19:29:11.975544  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:11.975553  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:11.975565  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:12.003072  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:29:12.003107  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:12.038364  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:12.038400  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:12.116412  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:12.116450  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:12.147738  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:12.147766  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:12.245018  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:12.245053  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:12.262566  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:12.262641  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:12.312750  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:12.312785  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:12.349963  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:12.349991  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:12.419426  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:12.411356    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.411861    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.413495    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.414181    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.415507    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:12.411356    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.411861    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.413495    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.414181    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.415507    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:12.419456  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:12.419472  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:12.444065  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:12.444093  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:12.511165  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:12.511200  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:15.042908  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:15.054321  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:15.054394  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:15.089860  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:15.089886  306747 cri.go:89] found id: ""
	I1017 19:29:15.089895  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:15.089951  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.093678  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:15.093788  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:15.121746  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:15.121771  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:15.121776  306747 cri.go:89] found id: ""
	I1017 19:29:15.121784  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:15.121839  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.125790  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.129470  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:15.129544  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:15.156564  306747 cri.go:89] found id: ""
	I1017 19:29:15.156591  306747 logs.go:282] 0 containers: []
	W1017 19:29:15.156600  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:15.156606  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:15.156665  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:15.189983  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:15.190010  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:15.190015  306747 cri.go:89] found id: ""
	I1017 19:29:15.190023  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:15.190113  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.194081  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.197983  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:15.198087  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:15.224673  306747 cri.go:89] found id: ""
	I1017 19:29:15.224701  306747 logs.go:282] 0 containers: []
	W1017 19:29:15.224710  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:15.224716  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:15.224776  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:15.250249  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:15.250272  306747 cri.go:89] found id: ""
	I1017 19:29:15.250280  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:15.250336  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.254014  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:15.254080  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:15.281235  306747 cri.go:89] found id: ""
	I1017 19:29:15.281313  306747 logs.go:282] 0 containers: []
	W1017 19:29:15.281337  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:15.281363  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:15.281395  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:15.385553  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:15.385599  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:15.411962  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:15.411991  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:15.455045  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:15.455073  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:15.527131  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:15.527170  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:15.554497  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:15.554527  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:15.587137  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:15.587164  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:15.604763  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:15.604794  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:15.679834  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:15.670121    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.670686    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.672157    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.672558    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.674247    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:15.670121    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.670686    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.672157    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.672558    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.674247    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:15.679857  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:15.679870  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:15.734902  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:15.734947  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:15.764734  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:15.764760  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:18.342635  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:18.353361  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:18.353435  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:18.380287  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:18.380311  306747 cri.go:89] found id: ""
	I1017 19:29:18.380319  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:18.380371  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.384298  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:18.384372  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:18.410566  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:18.410585  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:18.410590  306747 cri.go:89] found id: ""
	I1017 19:29:18.410597  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:18.410651  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.414392  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.417897  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:18.417969  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:18.447960  306747 cri.go:89] found id: ""
	I1017 19:29:18.447984  306747 logs.go:282] 0 containers: []
	W1017 19:29:18.447992  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:18.447999  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:18.448054  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:18.474020  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:18.474043  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:18.474049  306747 cri.go:89] found id: ""
	I1017 19:29:18.474059  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:18.474117  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.477723  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.481031  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:18.481111  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:18.508003  306747 cri.go:89] found id: ""
	I1017 19:29:18.508026  306747 logs.go:282] 0 containers: []
	W1017 19:29:18.508034  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:18.508040  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:18.508123  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:18.535988  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:18.536017  306747 cri.go:89] found id: ""
	I1017 19:29:18.536026  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:18.536114  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.539822  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:18.539919  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:18.565247  306747 cri.go:89] found id: ""
	I1017 19:29:18.565271  306747 logs.go:282] 0 containers: []
	W1017 19:29:18.565279  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:18.565287  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:18.565340  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:18.590409  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:18.590435  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:18.664546  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:18.664583  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:18.720073  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:18.720102  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:18.818026  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:18.818065  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:18.838304  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:18.838335  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:18.923376  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:18.914478    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.915271    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.916962    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.917666    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.919294    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:18.914478    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.915271    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.916962    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.917666    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.919294    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:18.923400  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:18.923413  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:18.958683  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:18.958723  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:18.993098  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:18.993125  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:19.020011  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:19.020054  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:19.072525  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:19.072558  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:21.648626  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:21.658854  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:21.658923  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:21.686357  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:21.686380  306747 cri.go:89] found id: ""
	I1017 19:29:21.686388  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:21.686440  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.690383  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:21.690455  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:21.716829  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:21.716849  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:21.716854  306747 cri.go:89] found id: ""
	I1017 19:29:21.716861  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:21.716918  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.720495  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.723948  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:21.724016  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:21.751438  306747 cri.go:89] found id: ""
	I1017 19:29:21.751462  306747 logs.go:282] 0 containers: []
	W1017 19:29:21.751471  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:21.751478  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:21.751540  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:21.777499  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:21.777526  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:21.777531  306747 cri.go:89] found id: ""
	I1017 19:29:21.777539  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:21.777597  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.781539  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.785454  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:21.785568  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:21.816183  306747 cri.go:89] found id: ""
	I1017 19:29:21.816248  306747 logs.go:282] 0 containers: []
	W1017 19:29:21.816270  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:21.816292  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:21.816377  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:21.854603  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:21.854670  306747 cri.go:89] found id: ""
	I1017 19:29:21.854695  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:21.854779  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.860948  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:21.861028  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:21.899847  306747 cri.go:89] found id: ""
	I1017 19:29:21.899871  306747 logs.go:282] 0 containers: []
	W1017 19:29:21.899879  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:21.899887  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:21.899899  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:21.958460  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:21.958497  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:22.040921  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:22.040958  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:22.070331  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:22.070410  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:22.149286  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:22.149326  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:22.180733  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:22.180761  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:22.199492  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:22.199531  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:22.272753  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:22.265010    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.265612    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.267150    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.267571    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.269051    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:22.265010    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.265612    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.267150    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.267571    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.269051    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:22.272779  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:22.272792  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:22.299733  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:22.299761  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:22.342105  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:22.342137  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:22.369741  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:22.369780  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:24.966101  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:24.976635  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:24.976715  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:25.022230  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:25.022256  306747 cri.go:89] found id: ""
	I1017 19:29:25.022267  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:25.022330  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.026476  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:25.026548  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:25.056264  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:25.056282  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:25.056287  306747 cri.go:89] found id: ""
	I1017 19:29:25.056295  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:25.056345  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.061372  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.064965  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:25.065034  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:25.104703  306747 cri.go:89] found id: ""
	I1017 19:29:25.104725  306747 logs.go:282] 0 containers: []
	W1017 19:29:25.104734  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:25.104739  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:25.104799  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:25.137104  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:25.137128  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:25.137134  306747 cri.go:89] found id: ""
	I1017 19:29:25.137142  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:25.137197  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.141057  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.144695  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:25.144771  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:25.171838  306747 cri.go:89] found id: ""
	I1017 19:29:25.171861  306747 logs.go:282] 0 containers: []
	W1017 19:29:25.171870  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:25.171876  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:25.171935  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:25.204227  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:25.204251  306747 cri.go:89] found id: ""
	I1017 19:29:25.204259  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:25.204312  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.208502  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:25.208632  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:25.234929  306747 cri.go:89] found id: ""
	I1017 19:29:25.235003  306747 logs.go:282] 0 containers: []
	W1017 19:29:25.235020  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:25.235030  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:25.235043  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:25.272163  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:25.272192  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:25.370863  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:25.370900  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:25.411966  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:25.412009  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:25.479240  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:25.479276  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:25.506577  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:25.506606  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:25.580671  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:25.580706  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:25.614033  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:25.614061  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:25.631893  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:25.631922  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:25.703391  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:25.694870    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.695646    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.697219    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.697740    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.699431    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:25.694870    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.695646    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.697219    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.697740    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.699431    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:25.703420  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:25.703449  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:25.729186  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:25.729213  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:28.281561  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:28.292670  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:28.292764  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:28.321689  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:28.321709  306747 cri.go:89] found id: ""
	I1017 19:29:28.321718  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:28.321791  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.325401  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:28.325491  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:28.353611  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:28.353636  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:28.353642  306747 cri.go:89] found id: ""
	I1017 19:29:28.353649  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:28.353708  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.357789  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.361132  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:28.361209  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:28.388364  306747 cri.go:89] found id: ""
	I1017 19:29:28.388392  306747 logs.go:282] 0 containers: []
	W1017 19:29:28.388401  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:28.388408  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:28.388471  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:28.414080  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:28.414105  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:28.414111  306747 cri.go:89] found id: ""
	I1017 19:29:28.414119  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:28.414176  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.417894  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.421494  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:28.421617  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:28.448583  306747 cri.go:89] found id: ""
	I1017 19:29:28.448611  306747 logs.go:282] 0 containers: []
	W1017 19:29:28.448620  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:28.448626  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:28.448683  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:28.481175  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:28.481198  306747 cri.go:89] found id: ""
	I1017 19:29:28.481208  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:28.481262  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.485099  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:28.485212  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:28.511543  306747 cri.go:89] found id: ""
	I1017 19:29:28.511569  306747 logs.go:282] 0 containers: []
	W1017 19:29:28.511577  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:28.511586  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:28.511617  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:28.606473  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:28.606511  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:28.626545  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:28.626577  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:28.697168  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:28.689422    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.690138    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.691704    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.692016    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.693514    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:28.689422    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.690138    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.691704    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.692016    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.693514    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:28.697191  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:28.697204  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:28.750046  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:28.750080  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:28.818139  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:28.818172  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:28.847832  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:28.847916  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:28.928453  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:28.928489  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:28.959160  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:28.959188  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:28.986346  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:28.986374  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:29.037329  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:29.037364  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
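	The cycle above enumerates each control-plane component with `sudo crictl ps -a --quiet --name=<component>` before tailing its logs. A minimal sketch of that listing step, assuming a hypothetical helper name (this is not minikube's actual cri.go code), looks like:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers returns the IDs of CRI containers whose name matches the
	// given filter, using the same `crictl ps -a --quiet --name=<name>` call
	// seen in the log lines above. Helper name and error handling are illustrative.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps failed: %w", err)
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listContainers("kube-apiserver")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}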
	I1017 19:29:31.569631  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:31.580386  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:31.580488  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:31.606748  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:31.606776  306747 cri.go:89] found id: ""
	I1017 19:29:31.606786  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:31.606861  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.610709  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:31.610808  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:31.637721  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:31.637742  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:31.637747  306747 cri.go:89] found id: ""
	I1017 19:29:31.637754  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:31.637831  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.641550  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.644918  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:31.644994  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:31.671222  306747 cri.go:89] found id: ""
	I1017 19:29:31.671248  306747 logs.go:282] 0 containers: []
	W1017 19:29:31.671257  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:31.671263  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:31.671320  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:31.698318  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:31.698341  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:31.698347  306747 cri.go:89] found id: ""
	I1017 19:29:31.698354  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:31.698409  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.702033  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.705305  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:31.705406  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:31.733910  306747 cri.go:89] found id: ""
	I1017 19:29:31.733940  306747 logs.go:282] 0 containers: []
	W1017 19:29:31.733949  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:31.733956  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:31.734012  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:31.759712  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:31.759743  306747 cri.go:89] found id: ""
	I1017 19:29:31.759752  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:31.759802  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.763496  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:31.763571  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:31.789631  306747 cri.go:89] found id: ""
	I1017 19:29:31.789656  306747 logs.go:282] 0 containers: []
	W1017 19:29:31.789665  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:31.789684  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:31.789701  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:31.907913  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:31.907961  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:31.927231  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:31.927316  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:32.018355  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:32.018394  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:32.062156  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:32.062194  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:32.153927  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:32.153962  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:32.187982  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:32.188010  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:32.258773  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:32.251239    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.251763    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.253326    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.253710    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.255187    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:32.251239    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.251763    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.253326    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.253710    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.255187    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:32.258796  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:32.258835  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:32.290660  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:32.290689  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:32.368997  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:32.369029  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:32.400957  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:32.400988  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
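	Every `kubectl describe nodes` attempt in this stretch fails with `dial tcp [::1]:8443: connect: connection refused`, which simply means nothing is listening on the apiserver port while it restarts. A hedged sketch of a probe that reproduces that condition (address and timeout are assumptions, not values from the test harness):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeAPIServer dials the apiserver's TCP port and reports whether anything
	// is listening. A "connection refused" here corresponds to the describe-nodes
	// failures shown above.
	func probeAPIServer(addr string, timeout time.Duration) error {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			return err // e.g. "connect: connection refused" while the apiserver is down
		}
		return conn.Close()
	}

	func main() {
		if err := probeAPIServer("localhost:8443", 2*time.Second); err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		fmt.Println("apiserver port is accepting connections")
	}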
	I1017 19:29:34.933742  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:34.945067  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:34.945160  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:34.975919  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:34.975944  306747 cri.go:89] found id: ""
	I1017 19:29:34.975952  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:34.976011  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:34.979876  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:34.979963  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:35.007426  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:35.007451  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:35.007456  306747 cri.go:89] found id: ""
	I1017 19:29:35.007464  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:35.007526  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.013588  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.018178  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:35.018277  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:35.048204  306747 cri.go:89] found id: ""
	I1017 19:29:35.048239  306747 logs.go:282] 0 containers: []
	W1017 19:29:35.048248  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:35.048255  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:35.048315  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:35.083329  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:35.083352  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:35.083358  306747 cri.go:89] found id: ""
	I1017 19:29:35.083366  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:35.083430  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.088406  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.094362  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:35.094435  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:35.125078  306747 cri.go:89] found id: ""
	I1017 19:29:35.125160  306747 logs.go:282] 0 containers: []
	W1017 19:29:35.125185  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:35.125198  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:35.125277  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:35.153519  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:35.153543  306747 cri.go:89] found id: ""
	I1017 19:29:35.153552  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:35.153605  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.157388  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:35.157485  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:35.189018  306747 cri.go:89] found id: ""
	I1017 19:29:35.189086  306747 logs.go:282] 0 containers: []
	W1017 19:29:35.189113  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:35.189142  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:35.189185  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:35.290719  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:35.290763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:35.310771  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:35.310803  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:35.386443  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:35.376912    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.377784    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.379400    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.379730    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.381228    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:35.376912    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.377784    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.379400    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.379730    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.381228    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:35.386470  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:35.386484  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:35.442234  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:35.442274  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:35.480866  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:35.480896  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:35.549288  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:35.549326  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:35.576073  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:35.576102  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:35.611273  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:35.611308  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:35.639731  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:35.639763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:35.671118  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:35.671148  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
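	The gathering steps fall into two shapes: systemd units (kubelet, crio) are read with `journalctl -u <unit> -n 400`, and individual containers with `crictl logs --tail 400 <id>`. A minimal sketch of both, assuming illustrative helper names rather than minikube's logs.go implementation:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// tailUnit mirrors the `journalctl -u <unit> -n 400` calls above.
	func tailUnit(unit string, lines int) (string, error) {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(lines)).CombinedOutput()
		return string(out), err
	}

	// tailContainer mirrors the `crictl logs --tail 400 <id>` calls above.
	func tailContainer(id string, lines int) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(lines), id).CombinedOutput()
		return string(out), err
	}

	func main() {
		if out, err := tailUnit("crio", 400); err == nil {
			fmt.Println(out)
		}
		// Container ID taken from the kube-apiserver entry in the log above.
		if out, err := tailContainer("134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b", 400); err == nil {
			fmt.Println(out)
		}
	}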
	I1017 19:29:38.244668  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:38.257170  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:38.257244  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:38.283218  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:38.283238  306747 cri.go:89] found id: ""
	I1017 19:29:38.283247  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:38.283305  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.287299  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:38.287365  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:38.314528  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:38.314550  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:38.314555  306747 cri.go:89] found id: ""
	I1017 19:29:38.314563  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:38.314614  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.318298  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.321948  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:38.322042  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:38.349464  306747 cri.go:89] found id: ""
	I1017 19:29:38.349503  306747 logs.go:282] 0 containers: []
	W1017 19:29:38.349516  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:38.349538  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:38.349626  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:38.379503  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:38.379565  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:38.379583  306747 cri.go:89] found id: ""
	I1017 19:29:38.379608  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:38.379675  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.383360  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.387192  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:38.387298  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:38.421165  306747 cri.go:89] found id: ""
	I1017 19:29:38.421190  306747 logs.go:282] 0 containers: []
	W1017 19:29:38.421199  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:38.421205  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:38.421293  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:38.449443  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:38.449509  306747 cri.go:89] found id: ""
	I1017 19:29:38.449530  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:38.449608  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.453406  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:38.453530  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:38.480577  306747 cri.go:89] found id: ""
	I1017 19:29:38.480640  306747 logs.go:282] 0 containers: []
	W1017 19:29:38.480662  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:38.480687  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:38.480712  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:38.558339  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:38.558375  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:38.588992  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:38.589018  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:38.688443  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:38.688478  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:38.705940  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:38.706012  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:38.738810  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:38.738836  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:38.765665  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:38.765693  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:38.841021  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:38.831886    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.832670    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.834636    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.835450    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.837074    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:38.831886    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.832670    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.834636    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.835450    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.837074    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:38.841095  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:38.841115  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:38.870763  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:38.870791  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:38.943129  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:38.943162  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:38.984504  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:38.984583  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
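	Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*` and, when that check fails, the log gathering repeats roughly every three seconds. A hedged sketch of such a poll loop (interval and deadline are assumptions, not minikube's actual tuning):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls `pgrep -xnf kube-apiserver.*minikube.*` until
	// the process appears or the deadline passes, matching the cadence of the
	// checks in the log above in spirit only.
	func waitForAPIServerProcess(interval, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil // process found
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", deadline)
	}

	func main() {
		if err := waitForAPIServerProcess(3*time.Second, time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-apiserver is running")
	}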
	I1017 19:29:41.577128  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:41.588152  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:41.588230  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:41.616214  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:41.616251  306747 cri.go:89] found id: ""
	I1017 19:29:41.616261  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:41.616333  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.620228  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:41.620301  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:41.647140  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:41.647166  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:41.647172  306747 cri.go:89] found id: ""
	I1017 19:29:41.647180  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:41.647241  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.650918  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.654626  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:41.654701  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:41.680974  306747 cri.go:89] found id: ""
	I1017 19:29:41.680999  306747 logs.go:282] 0 containers: []
	W1017 19:29:41.681008  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:41.681014  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:41.681071  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:41.707036  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:41.707071  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:41.707076  306747 cri.go:89] found id: ""
	I1017 19:29:41.707084  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:41.707137  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.710947  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.714920  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:41.715001  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:41.741927  306747 cri.go:89] found id: ""
	I1017 19:29:41.741952  306747 logs.go:282] 0 containers: []
	W1017 19:29:41.741962  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:41.741968  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:41.742026  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:41.766904  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:41.766928  306747 cri.go:89] found id: ""
	I1017 19:29:41.766936  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:41.766989  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.770640  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:41.770722  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:41.797979  306747 cri.go:89] found id: ""
	I1017 19:29:41.798007  306747 logs.go:282] 0 containers: []
	W1017 19:29:41.798017  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:41.798026  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:41.798038  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:41.815570  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:41.815602  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:41.872205  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:41.872246  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:41.910906  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:41.910942  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:41.996670  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:41.996709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:42.033766  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:42.033804  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:42.143006  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:42.143055  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:42.258670  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:42.246629    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.247190    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.249238    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.250318    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.251136    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:42.246629    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.247190    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.249238    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.250318    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.251136    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:42.258694  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:42.258709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:42.294390  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:42.294422  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:42.328168  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:42.328202  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:42.357875  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:42.357932  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:44.934951  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:44.945451  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:44.945522  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:44.979178  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:44.979201  306747 cri.go:89] found id: ""
	I1017 19:29:44.979209  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:44.979263  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:44.983046  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:44.983126  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:45.035414  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:45.035438  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:45.035443  306747 cri.go:89] found id: ""
	I1017 19:29:45.035451  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:45.035519  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.048433  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.053636  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:45.053716  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:45.120373  306747 cri.go:89] found id: ""
	I1017 19:29:45.120397  306747 logs.go:282] 0 containers: []
	W1017 19:29:45.120406  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:45.120414  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:45.120482  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:45.167585  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:45.167667  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:45.167692  306747 cri.go:89] found id: ""
	I1017 19:29:45.167719  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:45.167819  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.173369  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.178434  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:45.178531  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:45.220087  306747 cri.go:89] found id: ""
	I1017 19:29:45.220115  306747 logs.go:282] 0 containers: []
	W1017 19:29:45.220125  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:45.220132  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:45.220222  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:45.275433  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:45.275475  306747 cri.go:89] found id: ""
	I1017 19:29:45.275484  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:45.275559  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.281184  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:45.281323  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:45.323004  306747 cri.go:89] found id: ""
	I1017 19:29:45.323106  306747 logs.go:282] 0 containers: []
	W1017 19:29:45.323137  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:45.323188  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:45.323238  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:45.371491  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:45.371598  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:45.464170  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:45.455221    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.456745    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.457962    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.458630    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.460252    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:45.455221    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.456745    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.457962    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.458630    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.460252    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:45.464194  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:45.464206  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:45.499416  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:45.499445  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:45.536994  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:45.537028  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:45.615136  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:45.615172  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:45.720244  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:45.720281  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:45.778577  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:45.778610  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:45.859732  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:45.859813  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:45.896812  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:45.896889  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:45.929734  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:45.929763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
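	The "container status" step uses a fallback chain, `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`: try crictl first and only fall back to docker if crictl is missing or errors. A minimal Go sketch of the same fallback (helper name and error wrapping are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus tries `crictl ps -a` first and falls back to `docker ps -a`,
	// mirroring the shell fallback in the log line above.
	func containerStatus() (string, error) {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return string(out), nil
		}
		out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("neither crictl nor docker could list containers: %w", err)
		}
		return string(out), nil
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Print(out)
	}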
	I1017 19:29:48.461978  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:48.472688  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:48.472759  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:48.499995  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:48.500019  306747 cri.go:89] found id: ""
	I1017 19:29:48.500028  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:48.500084  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.504256  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:48.504330  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:48.533568  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:48.533627  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:48.533647  306747 cri.go:89] found id: ""
	I1017 19:29:48.533662  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:48.533722  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.538269  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.542307  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:48.542388  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:48.572286  306747 cri.go:89] found id: ""
	I1017 19:29:48.572355  306747 logs.go:282] 0 containers: []
	W1017 19:29:48.572379  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:48.572405  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:48.572499  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:48.599218  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:48.599246  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:48.599251  306747 cri.go:89] found id: ""
	I1017 19:29:48.599259  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:48.599310  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.603036  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.606361  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:48.606471  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:48.631930  306747 cri.go:89] found id: ""
	I1017 19:29:48.631966  306747 logs.go:282] 0 containers: []
	W1017 19:29:48.631975  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:48.631982  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:48.632052  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:48.658684  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:48.658711  306747 cri.go:89] found id: ""
	I1017 19:29:48.658720  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:48.658773  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.662512  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:48.662586  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:48.688997  306747 cri.go:89] found id: ""
	I1017 19:29:48.689022  306747 logs.go:282] 0 containers: []
	W1017 19:29:48.689031  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:48.689041  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:48.689052  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:48.789868  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:48.789919  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:48.860960  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:48.850451    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.851072    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.852664    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.852967    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.854822    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:48.850451    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.851072    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.852664    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.852967    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.854822    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:48.860984  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:48.861000  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:48.933293  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:48.933334  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:48.961662  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:48.961692  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:48.998503  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:48.998533  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:49.030219  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:49.030292  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:49.048915  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:49.048949  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:49.075217  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:49.075256  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:49.132824  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:49.132859  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:49.166233  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:49.166269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:51.747014  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:51.757581  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:51.757655  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:51.783413  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:51.783436  306747 cri.go:89] found id: ""
	I1017 19:29:51.783444  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:51.783499  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.787489  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:51.787553  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:51.815381  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:51.815404  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:51.815408  306747 cri.go:89] found id: ""
	I1017 19:29:51.815415  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:51.815467  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.819345  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.822754  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:51.822830  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:51.863882  306747 cri.go:89] found id: ""
	I1017 19:29:51.863922  306747 logs.go:282] 0 containers: []
	W1017 19:29:51.863931  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:51.863937  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:51.863997  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:51.896342  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:51.896414  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:51.896433  306747 cri.go:89] found id: ""
	I1017 19:29:51.896457  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:51.896574  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.900688  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.905025  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:51.905156  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:51.950302  306747 cri.go:89] found id: ""
	I1017 19:29:51.950325  306747 logs.go:282] 0 containers: []
	W1017 19:29:51.950333  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:51.950339  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:51.950408  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:51.984143  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:51.984164  306747 cri.go:89] found id: ""
	I1017 19:29:51.984172  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:51.984225  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.988312  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:51.988387  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:52.024692  306747 cri.go:89] found id: ""
	I1017 19:29:52.024720  306747 logs.go:282] 0 containers: []
	W1017 19:29:52.024729  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:52.024738  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:52.024750  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:52.043591  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:52.043708  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:52.083962  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:52.084045  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:52.156858  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:52.149368    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.149750    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.151218    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.151521    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.152949    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:52.149368    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.149750    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.151218    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.151521    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.152949    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:52.156879  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:52.156894  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:52.183367  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:52.183396  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:52.244364  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:52.244445  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:52.277850  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:52.277883  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:52.363433  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:52.363473  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:52.392573  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:52.392602  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:52.421470  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:52.421499  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:52.502975  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:52.503014  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:55.106386  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:55.118281  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:55.118357  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:55.147588  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:55.147612  306747 cri.go:89] found id: ""
	I1017 19:29:55.147625  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:55.147679  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.151460  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:55.151530  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:55.179417  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:55.179441  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:55.179447  306747 cri.go:89] found id: ""
	I1017 19:29:55.179455  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:55.179512  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.184062  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.187762  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:55.187876  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:55.214159  306747 cri.go:89] found id: ""
	I1017 19:29:55.214187  306747 logs.go:282] 0 containers: []
	W1017 19:29:55.214196  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:55.214203  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:55.214268  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:55.244963  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:55.244987  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:55.244992  306747 cri.go:89] found id: ""
	I1017 19:29:55.244999  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:55.245052  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.250157  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.256061  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:55.256151  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:55.287091  306747 cri.go:89] found id: ""
	I1017 19:29:55.287114  306747 logs.go:282] 0 containers: []
	W1017 19:29:55.287122  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:55.287128  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:55.287192  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:55.316175  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:55.316245  306747 cri.go:89] found id: ""
	I1017 19:29:55.316268  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:55.316359  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.321292  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:55.321374  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:55.348125  306747 cri.go:89] found id: ""
	I1017 19:29:55.348151  306747 logs.go:282] 0 containers: []
	W1017 19:29:55.348160  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:55.348169  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:55.348181  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:55.380783  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:55.380812  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:55.414351  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:55.414386  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:55.484774  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:55.475182    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.476192    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.478010    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.478543    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.480183    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:55.475182    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.476192    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.478010    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.478543    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.480183    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:55.484796  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:55.484809  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:55.556984  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:55.557018  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:55.625177  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:55.625251  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:55.655370  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:55.655398  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:55.680829  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:55.680860  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:55.763300  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:55.763331  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:55.803920  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:55.803954  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:55.900738  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:55.900773  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:58.422801  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:58.433443  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:58.433516  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:58.464116  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:58.464136  306747 cri.go:89] found id: ""
	I1017 19:29:58.464144  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:58.464212  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.468047  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:58.468169  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:58.494945  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:58.494979  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:58.494985  306747 cri.go:89] found id: ""
	I1017 19:29:58.494993  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:58.495058  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.498896  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.502320  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:58.502386  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:58.531527  306747 cri.go:89] found id: ""
	I1017 19:29:58.531550  306747 logs.go:282] 0 containers: []
	W1017 19:29:58.531558  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:58.531564  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:58.531623  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:58.558316  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:58.558337  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:58.558342  306747 cri.go:89] found id: ""
	I1017 19:29:58.558350  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:58.558403  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.562311  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.565856  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:58.565960  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:58.591130  306747 cri.go:89] found id: ""
	I1017 19:29:58.591156  306747 logs.go:282] 0 containers: []
	W1017 19:29:58.591164  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:58.591173  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:58.591229  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:58.618142  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:58.618221  306747 cri.go:89] found id: ""
	I1017 19:29:58.618237  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:58.618297  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.621817  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:58.621888  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:58.651258  306747 cri.go:89] found id: ""
	I1017 19:29:58.651284  306747 logs.go:282] 0 containers: []
	W1017 19:29:58.651293  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:58.651302  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:58.651315  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:58.720909  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:58.720942  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:58.748703  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:58.748729  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:58.776433  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:58.776463  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:58.851007  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:58.851041  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:58.884351  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:58.884382  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:58.957941  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:58.949361    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.950154    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.951742    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.952330    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.954025    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:58.949361    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.950154    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.951742    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.952330    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.954025    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:58.957961  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:58.957974  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:58.987459  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:58.987531  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:59.026978  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:59.027008  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:59.128822  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:59.128858  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:59.146047  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:59.146079  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:01.705070  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:01.718647  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:01.718748  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:01.753347  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:01.753387  306747 cri.go:89] found id: ""
	I1017 19:30:01.753395  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:01.753457  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.757741  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:01.757850  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:01.786783  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:01.786861  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:01.786873  306747 cri.go:89] found id: ""
	I1017 19:30:01.786882  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:01.787029  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.791549  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.796677  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:01.796752  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:01.826434  306747 cri.go:89] found id: ""
	I1017 19:30:01.826462  306747 logs.go:282] 0 containers: []
	W1017 19:30:01.826472  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:01.826478  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:01.826543  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:01.863544  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:01.863569  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:01.863574  306747 cri.go:89] found id: ""
	I1017 19:30:01.863582  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:01.863639  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.867992  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.872125  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:01.872206  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:01.908249  306747 cri.go:89] found id: ""
	I1017 19:30:01.908276  306747 logs.go:282] 0 containers: []
	W1017 19:30:01.908285  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:01.908292  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:01.908354  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:01.936971  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:01.937001  306747 cri.go:89] found id: ""
	I1017 19:30:01.937010  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:01.937105  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.941357  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:01.941426  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:01.982542  306747 cri.go:89] found id: ""
	I1017 19:30:01.982569  306747 logs.go:282] 0 containers: []
	W1017 19:30:01.982578  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:01.982593  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:01.982606  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:02.018942  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:02.018970  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:02.099513  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:02.099556  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:02.137502  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:02.137532  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:02.185697  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:02.185738  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:02.288795  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:02.288835  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:02.336210  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:02.336248  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:02.422878  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:02.422917  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:02.453635  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:02.453662  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:02.540123  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:02.540164  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:02.558457  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:02.558491  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:02.629161  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:02.619096   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.619981   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.621652   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.622279   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.624619   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:02.619096   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.619981   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.621652   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.622279   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.624619   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:05.130448  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:05.144120  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:05.144214  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:05.175291  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:05.175324  306747 cri.go:89] found id: ""
	I1017 19:30:05.175334  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:05.175394  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.179428  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:05.179514  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:05.212486  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:05.212511  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:05.212541  306747 cri.go:89] found id: ""
	I1017 19:30:05.212550  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:05.212606  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.216463  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.220220  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:05.220295  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:05.249597  306747 cri.go:89] found id: ""
	I1017 19:30:05.249624  306747 logs.go:282] 0 containers: []
	W1017 19:30:05.249633  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:05.249640  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:05.249706  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:05.276856  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:05.276878  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:05.276883  306747 cri.go:89] found id: ""
	I1017 19:30:05.276890  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:05.276945  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.280586  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.284132  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:05.284196  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:05.312051  306747 cri.go:89] found id: ""
	I1017 19:30:05.312081  306747 logs.go:282] 0 containers: []
	W1017 19:30:05.312090  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:05.312096  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:05.312154  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:05.339324  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:05.339345  306747 cri.go:89] found id: ""
	I1017 19:30:05.339353  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:05.339406  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.343274  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:05.343351  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:05.371042  306747 cri.go:89] found id: ""
	I1017 19:30:05.371067  306747 logs.go:282] 0 containers: []
	W1017 19:30:05.371076  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:05.371086  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:05.371103  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:05.395923  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:05.395957  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:05.453746  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:05.453785  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:05.495400  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:05.495436  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:05.522354  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:05.522384  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:05.603168  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:05.603203  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:05.635130  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:05.635158  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:05.730159  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:05.730196  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:05.805436  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:05.797321   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.798191   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.799878   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.800180   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.801717   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:05.797321   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.798191   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.799878   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.800180   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.801717   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:05.805458  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:05.805471  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:05.831415  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:05.831453  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:05.915270  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:05.915309  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:08.445553  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:08.457157  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:08.457224  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:08.489306  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:08.489335  306747 cri.go:89] found id: ""
	I1017 19:30:08.489344  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:08.489399  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.493424  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:08.493497  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:08.523021  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:08.523056  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:08.523061  306747 cri.go:89] found id: ""
	I1017 19:30:08.523069  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:08.523133  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.527165  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.530929  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:08.531043  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:08.560240  306747 cri.go:89] found id: ""
	I1017 19:30:08.560266  306747 logs.go:282] 0 containers: []
	W1017 19:30:08.560275  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:08.560282  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:08.560340  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:08.587950  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:08.587974  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:08.587979  306747 cri.go:89] found id: ""
	I1017 19:30:08.587987  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:08.588048  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.591797  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.595627  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:08.595710  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:08.622023  306747 cri.go:89] found id: ""
	I1017 19:30:08.622048  306747 logs.go:282] 0 containers: []
	W1017 19:30:08.622057  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:08.622064  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:08.622123  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:08.652098  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:08.652194  306747 cri.go:89] found id: ""
	I1017 19:30:08.652232  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:08.652399  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.657095  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:08.657180  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:08.687380  306747 cri.go:89] found id: ""
	I1017 19:30:08.687404  306747 logs.go:282] 0 containers: []
	W1017 19:30:08.687412  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:08.687421  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:08.687433  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:08.785046  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:08.785084  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:08.815287  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:08.815318  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:08.880972  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:08.881008  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:08.919918  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:08.919947  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:08.994592  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:08.994632  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:09.029806  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:09.029833  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:09.059196  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:09.059224  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:09.077625  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:09.077658  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:09.155722  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:09.147557   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.148286   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.149973   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.150565   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.152238   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:09.147557   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.148286   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.149973   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.150565   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.152238   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:09.155746  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:09.155759  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:09.230856  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:09.230895  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:11.763218  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:11.774210  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:11.774310  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:11.807759  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:11.807778  306747 cri.go:89] found id: ""
	I1017 19:30:11.807786  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:11.807840  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.812129  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:11.812202  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:11.840430  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:11.840451  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:11.840459  306747 cri.go:89] found id: ""
	I1017 19:30:11.840467  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:11.840562  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.844491  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.848972  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:11.849065  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:11.876962  306747 cri.go:89] found id: ""
	I1017 19:30:11.876986  306747 logs.go:282] 0 containers: []
	W1017 19:30:11.876994  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:11.877000  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:11.877060  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:11.907338  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:11.907402  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:11.907421  306747 cri.go:89] found id: ""
	I1017 19:30:11.907446  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:11.907534  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.911700  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.915708  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:11.915823  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:11.945931  306747 cri.go:89] found id: ""
	I1017 19:30:11.945968  306747 logs.go:282] 0 containers: []
	W1017 19:30:11.945976  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:11.945983  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:11.946041  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:11.973489  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:11.973509  306747 cri.go:89] found id: ""
	I1017 19:30:11.973517  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:11.973582  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.979325  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:11.979401  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:12.006387  306747 cri.go:89] found id: ""
	I1017 19:30:12.006415  306747 logs.go:282] 0 containers: []
	W1017 19:30:12.006425  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:12.006437  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:12.006452  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:12.112142  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:12.112180  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:12.130633  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:12.130662  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:12.219234  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:12.204079   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.204586   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.208545   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.212324   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.214784   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:12.204079   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.204586   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.208545   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.212324   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.214784   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:12.219259  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:12.219274  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:12.248889  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:12.248918  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:12.284961  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:12.284995  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:12.360893  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:12.360930  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:12.394406  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:12.394433  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:12.420215  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:12.420245  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:12.477947  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:12.477980  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:12.559952  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:12.559989  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:15.098061  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:15.110601  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:15.110673  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:15.142831  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:15.142854  306747 cri.go:89] found id: ""
	I1017 19:30:15.142863  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:15.142922  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.147216  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:15.147336  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:15.177462  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:15.177487  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:15.177492  306747 cri.go:89] found id: ""
	I1017 19:30:15.177500  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:15.177556  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.182001  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.186668  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:15.186752  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:15.218350  306747 cri.go:89] found id: ""
	I1017 19:30:15.218375  306747 logs.go:282] 0 containers: []
	W1017 19:30:15.218383  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:15.218389  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:15.218449  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:15.247656  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:15.247730  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:15.247750  306747 cri.go:89] found id: ""
	I1017 19:30:15.247774  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:15.247847  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.251499  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.254966  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:15.255039  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:15.282034  306747 cri.go:89] found id: ""
	I1017 19:30:15.282056  306747 logs.go:282] 0 containers: []
	W1017 19:30:15.282065  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:15.282071  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:15.282131  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:15.313582  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:15.313643  306747 cri.go:89] found id: ""
	I1017 19:30:15.313665  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:15.313739  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.317325  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:15.317407  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:15.343894  306747 cri.go:89] found id: ""
	I1017 19:30:15.343921  306747 logs.go:282] 0 containers: []
	W1017 19:30:15.343937  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:15.343947  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:15.343967  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:15.416772  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:15.408215   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.409020   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.410494   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.410798   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.412827   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:15.408215   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.409020   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.410494   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.410798   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.412827   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:15.416794  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:15.416807  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:15.455991  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:15.456060  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:15.533107  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:15.533144  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:15.605424  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:15.605464  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:15.633544  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:15.633572  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:15.710509  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:15.710545  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:15.744271  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:15.744352  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:15.844584  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:15.844621  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:15.865714  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:15.865745  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:15.910911  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:15.910945  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:18.440664  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:18.451576  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:18.451643  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:18.480927  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:18.480948  306747 cri.go:89] found id: ""
	I1017 19:30:18.480956  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:18.481010  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.484797  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:18.484886  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:18.512958  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:18.513034  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:18.513045  306747 cri.go:89] found id: ""
	I1017 19:30:18.513053  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:18.513106  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.516855  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.520298  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:18.520369  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:18.546427  306747 cri.go:89] found id: ""
	I1017 19:30:18.546453  306747 logs.go:282] 0 containers: []
	W1017 19:30:18.546462  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:18.546468  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:18.546532  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:18.573945  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:18.574007  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:18.574021  306747 cri.go:89] found id: ""
	I1017 19:30:18.574030  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:18.574094  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.577681  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.581276  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:18.581357  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:18.607914  306747 cri.go:89] found id: ""
	I1017 19:30:18.607941  306747 logs.go:282] 0 containers: []
	W1017 19:30:18.607950  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:18.607956  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:18.608013  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:18.634762  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:18.634781  306747 cri.go:89] found id: ""
	I1017 19:30:18.634789  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:18.634842  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.638638  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:18.638754  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:18.666586  306747 cri.go:89] found id: ""
	I1017 19:30:18.666610  306747 logs.go:282] 0 containers: []
	W1017 19:30:18.666618  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:18.666627  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:18.666639  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:18.685607  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:18.685637  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:18.740058  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:18.740088  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:18.816374  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:18.816410  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:18.842654  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:18.842686  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:18.921888  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:18.913390   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.913958   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.915701   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.916258   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.918025   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:18.913390   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.913958   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.915701   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.916258   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.918025   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:18.921914  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:18.921930  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:18.948267  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:18.948298  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:19.003855  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:19.003894  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:19.033396  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:19.033424  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:19.128308  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:19.128353  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:19.162140  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:19.162166  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:21.764178  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:21.775522  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:21.775596  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:21.803342  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:21.803367  306747 cri.go:89] found id: ""
	I1017 19:30:21.803377  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:21.803442  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.807522  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:21.807598  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:21.836696  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:21.836720  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:21.836726  306747 cri.go:89] found id: ""
	I1017 19:30:21.836734  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:21.836789  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.840752  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.844455  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:21.844557  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:21.872104  306747 cri.go:89] found id: ""
	I1017 19:30:21.872131  306747 logs.go:282] 0 containers: []
	W1017 19:30:21.872140  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:21.872147  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:21.872210  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:21.908413  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:21.908439  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:21.908448  306747 cri.go:89] found id: ""
	I1017 19:30:21.908455  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:21.908513  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.912640  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.916402  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:21.916476  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:21.950380  306747 cri.go:89] found id: ""
	I1017 19:30:21.950466  306747 logs.go:282] 0 containers: []
	W1017 19:30:21.950498  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:21.950517  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:21.950628  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:21.983152  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:21.983177  306747 cri.go:89] found id: ""
	I1017 19:30:21.983187  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:21.983243  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.986962  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:21.987037  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:22.019909  306747 cri.go:89] found id: ""
	I1017 19:30:22.019935  306747 logs.go:282] 0 containers: []
	W1017 19:30:22.019944  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:22.019953  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:22.019996  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:22.069135  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:22.069175  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:22.103886  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:22.103916  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:22.133109  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:22.133136  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:22.215579  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:22.215617  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:22.297981  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:22.289181   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.289836   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.291072   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.291590   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.293032   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:22.289181   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.289836   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.291072   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.291590   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.293032   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:22.298003  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:22.298017  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:22.373102  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:22.373140  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:22.406083  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:22.406110  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:22.506621  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:22.506659  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:22.526268  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:22.526299  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:22.557755  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:22.557784  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:25.116647  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:25.128310  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:25.128412  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:25.158258  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:25.158281  306747 cri.go:89] found id: ""
	I1017 19:30:25.158293  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:25.158358  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.162693  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:25.162773  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:25.197276  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:25.197301  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:25.197307  306747 cri.go:89] found id: ""
	I1017 19:30:25.197315  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:25.197407  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.201342  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.205350  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:25.205422  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:25.233590  306747 cri.go:89] found id: ""
	I1017 19:30:25.233617  306747 logs.go:282] 0 containers: []
	W1017 19:30:25.233627  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:25.233634  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:25.233693  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:25.260459  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:25.260486  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:25.260492  306747 cri.go:89] found id: ""
	I1017 19:30:25.260500  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:25.260582  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.266116  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.269609  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:25.269709  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:25.299945  306747 cri.go:89] found id: ""
	I1017 19:30:25.299970  306747 logs.go:282] 0 containers: []
	W1017 19:30:25.299979  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:25.299986  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:25.300062  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:25.327588  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:25.327611  306747 cri.go:89] found id: ""
	I1017 19:30:25.327619  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:25.327695  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.331614  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:25.331714  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:25.360945  306747 cri.go:89] found id: ""
	I1017 19:30:25.360969  306747 logs.go:282] 0 containers: []
	W1017 19:30:25.360978  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:25.360987  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:25.361018  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:25.419332  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:25.419371  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:25.455422  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:25.455454  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:25.533420  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:25.533454  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:25.561277  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:25.561303  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:25.589003  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:25.589032  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:25.667191  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:25.667225  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:25.697081  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:25.697108  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:25.796723  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:25.796756  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:25.817825  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:25.817854  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:25.895602  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:25.887039   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.887933   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.889709   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.890373   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.891870   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:25.887039   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.887933   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.889709   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.890373   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.891870   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:25.895626  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:25.895639  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
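Every "describe nodes" attempt above fails the same way: kubectl on the node cannot reach the apiserver on localhost:8443. A minimal manual check from inside the node, assuming shell access to it (for example via `minikube ssh -p <profile>`, where <profile> is a placeholder for the profile under test) and that ss and curl are installed there, might look like:

    # is anything listening on the apiserver port?
    sudo ss -ltnp | grep 8443
    # probe the health endpoint directly (self-signed cert, hence -k)
    curl -k https://localhost:8443/healthz
    # re-run exactly what the log gatherer attempts
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

If nothing is listening, the kube-apiserver container that each pass finds is most likely restarting rather than serving, which would also be consistent with coredns and kube-proxy never appearing in these listings.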
	I1017 19:30:28.421545  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:28.432472  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:28.432573  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:28.461368  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:28.461391  306747 cri.go:89] found id: ""
	I1017 19:30:28.461400  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:28.461454  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.466145  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:28.466221  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:28.496790  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:28.496814  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:28.496822  306747 cri.go:89] found id: ""
	I1017 19:30:28.496830  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:28.496886  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.500588  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.504150  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:28.504250  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:28.530114  306747 cri.go:89] found id: ""
	I1017 19:30:28.530141  306747 logs.go:282] 0 containers: []
	W1017 19:30:28.530150  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:28.530157  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:28.530257  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:28.560630  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:28.560660  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:28.560675  306747 cri.go:89] found id: ""
	I1017 19:30:28.560684  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:28.560737  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.564422  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.568093  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:28.568165  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:28.598927  306747 cri.go:89] found id: ""
	I1017 19:30:28.598954  306747 logs.go:282] 0 containers: []
	W1017 19:30:28.598963  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:28.598969  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:28.599075  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:28.625977  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:28.626001  306747 cri.go:89] found id: ""
	I1017 19:30:28.626010  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:28.626090  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.629847  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:28.629929  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:28.656469  306747 cri.go:89] found id: ""
	I1017 19:30:28.656494  306747 logs.go:282] 0 containers: []
	W1017 19:30:28.656503  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:28.656513  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:28.656548  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:28.758826  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:28.758863  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:28.778387  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:28.778416  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:28.845382  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:28.837571   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.838156   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.839753   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.840320   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.841429   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:28.837571   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.838156   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.839753   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.840320   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.841429   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:28.845407  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:28.845420  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:28.889092  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:28.889167  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:28.970950  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:28.970986  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:29.003996  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:29.004028  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:29.064888  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:29.064926  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:29.105700  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:29.105729  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:29.141040  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:29.141066  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:29.224674  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:29.224710  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
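The discovery passes above are plain crictl calls: list containers by name with --quiet to get bare IDs, then pull the last 400 log lines of each ID. A hand-run equivalent, reusing the kube-apiserver ID already printed in this log and assuming crictl is on the PATH, would be:

    # all kube-apiserver containers (running or exited), IDs only
    sudo crictl ps -a --quiet --name=kube-apiserver
    # last 400 log lines of the container found above
    sudo crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b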
	I1017 19:30:31.757505  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:31.767848  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:31.767914  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:31.800059  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:31.800082  306747 cri.go:89] found id: ""
	I1017 19:30:31.800093  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:31.800147  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.803723  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:31.803795  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:31.830502  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:31.830525  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:31.830530  306747 cri.go:89] found id: ""
	I1017 19:30:31.830546  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:31.830600  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.834866  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.838218  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:31.838293  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:31.866917  306747 cri.go:89] found id: ""
	I1017 19:30:31.866944  306747 logs.go:282] 0 containers: []
	W1017 19:30:31.866953  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:31.866960  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:31.867015  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:31.898652  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:31.898673  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:31.898679  306747 cri.go:89] found id: ""
	I1017 19:30:31.898692  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:31.898745  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.902404  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.905916  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:31.906005  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:31.936988  306747 cri.go:89] found id: ""
	I1017 19:30:31.937055  306747 logs.go:282] 0 containers: []
	W1017 19:30:31.937080  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:31.937103  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:31.937192  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:31.965478  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:31.965506  306747 cri.go:89] found id: ""
	I1017 19:30:31.965515  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:31.965570  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.969541  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:31.969611  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:31.997913  306747 cri.go:89] found id: ""
	I1017 19:30:31.997936  306747 logs.go:282] 0 containers: []
	W1017 19:30:31.997945  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:31.997954  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:31.997967  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:32.075635  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:32.076176  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:32.124512  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:32.124607  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:32.203895  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:32.203930  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:32.237712  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:32.237745  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:32.265784  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:32.265812  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:32.296288  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:32.296316  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:32.413833  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:32.413869  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:32.431287  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:32.431316  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:32.496198  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:32.487969   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.488616   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.490480   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.490935   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.492578   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:32.487969   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.488616   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.490480   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.490935   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.492578   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:32.496222  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:32.496238  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:32.522527  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:32.522556  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
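Besides per-container logs, each pass also pulls host-level sources: the crio and kubelet units from journald, and warnings and above from the kernel ring buffer. The same data can be read directly on the node with the commands below; --no-pager is added here only for convenience in an interactive shell, the gatherer omits it because it already runs non-interactively:

    sudo journalctl -u crio -n 400 --no-pager
    sudo journalctl -u kubelet -n 400 --no-pager
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400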
	I1017 19:30:35.098806  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:35.114025  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:35.114098  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:35.150192  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:35.150215  306747 cri.go:89] found id: ""
	I1017 19:30:35.150224  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:35.150291  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.154431  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:35.154528  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:35.187248  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:35.187274  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:35.187280  306747 cri.go:89] found id: ""
	I1017 19:30:35.187288  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:35.187342  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.190988  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.194467  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:35.194544  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:35.226183  306747 cri.go:89] found id: ""
	I1017 19:30:35.226209  306747 logs.go:282] 0 containers: []
	W1017 19:30:35.226228  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:35.226277  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:35.226345  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:35.254492  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:35.254514  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:35.254532  306747 cri.go:89] found id: ""
	I1017 19:30:35.254542  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:35.254600  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.258515  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.262160  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:35.262245  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:35.290479  306747 cri.go:89] found id: ""
	I1017 19:30:35.290556  306747 logs.go:282] 0 containers: []
	W1017 19:30:35.290573  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:35.290581  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:35.290647  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:35.320673  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:35.320696  306747 cri.go:89] found id: ""
	I1017 19:30:35.320705  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:35.320760  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.324577  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:35.324650  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:35.351615  306747 cri.go:89] found id: ""
	I1017 19:30:35.351643  306747 logs.go:282] 0 containers: []
	W1017 19:30:35.351652  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:35.351662  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:35.351674  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:35.426069  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:35.414413   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.418263   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.419343   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.419972   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.421885   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:35.414413   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.418263   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.419343   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.419972   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.421885   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:35.426092  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:35.426105  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:35.458415  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:35.458445  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:35.532727  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:35.532763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:35.570789  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:35.570821  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:35.654656  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:35.654691  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:35.682337  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:35.682368  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:35.783217  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:35.783263  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:35.809044  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:35.809075  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:35.836181  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:35.836213  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:35.922975  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:35.923013  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
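The timestamps show the gatherer re-checking for an apiserver process roughly every three seconds. A rough stand-alone sketch of that wait loop, with an illustrative 60-second cap that is not taken from these lines, is below; pgrep -f matches against the full command line, -x requires an exact match of that line, and -n keeps only the newest matching process:

    # poll for a minikube-started kube-apiserver process, giving up after ~60s (cap is illustrative)
    for i in $(seq 1 20); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 3
    done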
	I1017 19:30:38.460477  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:38.471359  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:38.471462  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:38.500899  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:38.500923  306747 cri.go:89] found id: ""
	I1017 19:30:38.500932  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:38.501005  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.505166  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:38.505244  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:38.531743  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:38.531766  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:38.531771  306747 cri.go:89] found id: ""
	I1017 19:30:38.531779  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:38.531842  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.535645  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.539501  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:38.539580  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:38.568890  306747 cri.go:89] found id: ""
	I1017 19:30:38.568915  306747 logs.go:282] 0 containers: []
	W1017 19:30:38.568923  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:38.568929  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:38.568989  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:38.594452  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:38.594476  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:38.594482  306747 cri.go:89] found id: ""
	I1017 19:30:38.594490  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:38.594544  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.598456  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.606409  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:38.606483  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:38.632993  306747 cri.go:89] found id: ""
	I1017 19:30:38.633015  306747 logs.go:282] 0 containers: []
	W1017 19:30:38.633024  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:38.633030  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:38.633091  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:38.659776  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:38.659800  306747 cri.go:89] found id: ""
	I1017 19:30:38.659809  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:38.659861  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.663404  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:38.663507  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:38.688978  306747 cri.go:89] found id: ""
	I1017 19:30:38.689003  306747 logs.go:282] 0 containers: []
	W1017 19:30:38.689012  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:38.689021  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:38.689033  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:38.722471  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:38.722497  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:38.800538  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:38.800575  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:38.832423  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:38.832451  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:38.939609  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:38.939648  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:38.959665  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:38.959701  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:39.039314  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:39.030321   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.030924   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.032747   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.033627   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.034935   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:39.030321   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.030924   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.032747   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.033627   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.034935   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:39.039340  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:39.039355  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:39.113637  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:39.113709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:39.148504  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:39.148662  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:39.223019  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:39.223056  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:39.253605  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:39.253635  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
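The "container status" step uses a deliberately defensive command: it resolves crictl via which, and if the CRI listing fails it falls back to docker ps, presumably so the same gatherer also works on docker-runtime nodes. Written out as a one-liner equivalent to the backtick form in the log, it is simply:

    # prefer crictl when present, otherwise fall back to the docker CLI
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a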
	I1017 19:30:41.780640  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:41.791876  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:41.791949  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:41.819510  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:41.819583  306747 cri.go:89] found id: ""
	I1017 19:30:41.819606  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:41.819691  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.824390  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:41.824462  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:41.856605  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:41.856636  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:41.856642  306747 cri.go:89] found id: ""
	I1017 19:30:41.856649  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:41.856715  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.864466  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.868588  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:41.868666  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:41.903466  306747 cri.go:89] found id: ""
	I1017 19:30:41.903498  306747 logs.go:282] 0 containers: []
	W1017 19:30:41.903507  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:41.903514  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:41.903571  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:41.930657  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:41.930682  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:41.930687  306747 cri.go:89] found id: ""
	I1017 19:30:41.930694  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:41.930749  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.934754  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.938781  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:41.938871  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:41.968280  306747 cri.go:89] found id: ""
	I1017 19:30:41.968306  306747 logs.go:282] 0 containers: []
	W1017 19:30:41.968315  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:41.968322  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:41.968402  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:41.995850  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:41.995931  306747 cri.go:89] found id: ""
	I1017 19:30:41.995955  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:41.996030  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.999630  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:41.999700  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:42.044891  306747 cri.go:89] found id: ""
	I1017 19:30:42.044926  306747 logs.go:282] 0 containers: []
	W1017 19:30:42.044935  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:42.044952  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:42.044971  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:42.174128  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:42.174267  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:42.224381  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:42.224413  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:42.333478  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:42.333518  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:42.353368  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:42.353403  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:42.391604  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:42.391635  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:42.426317  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:42.426347  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:42.503367  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:42.494794   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.495471   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.497096   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.497695   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.499206   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:42.494794   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.495471   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.497096   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.497695   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.499206   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:42.503388  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:42.503401  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:42.560324  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:42.560359  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:42.632932  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:42.632968  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:42.665758  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:42.665844  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
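Notably, every pass finds no coredns, kube-proxy, or kindnet containers at all, only the static control-plane containers. Assuming the standard kubeadm layout that minikube uses, the control-plane pods come from manifests on disk, while coredns, kube-proxy, and any CNI pods need a reachable apiserver before they can be (re)created; two quick checks on the node that help separate the two cases are:

    # static control-plane manifests written to disk by kubeadm
    ls /etc/kubernetes/manifests
    # pod sandboxes known to the CRI runtime, regardless of container state
    sudo crictl pods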
	I1017 19:30:45.196869  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:45.213931  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:45.214024  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:45.259283  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:45.259312  306747 cri.go:89] found id: ""
	I1017 19:30:45.259321  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:45.259390  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.265805  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:45.265913  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:45.316071  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:45.316098  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:45.316103  306747 cri.go:89] found id: ""
	I1017 19:30:45.316112  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:45.316178  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.329246  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.342518  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:45.342722  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:45.403649  306747 cri.go:89] found id: ""
	I1017 19:30:45.403681  306747 logs.go:282] 0 containers: []
	W1017 19:30:45.403691  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:45.403700  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:45.403771  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:45.436373  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:45.436398  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:45.436404  306747 cri.go:89] found id: ""
	I1017 19:30:45.436412  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:45.436470  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.442171  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.446282  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:45.446378  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:45.480185  306747 cri.go:89] found id: ""
	I1017 19:30:45.480211  306747 logs.go:282] 0 containers: []
	W1017 19:30:45.480269  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:45.480281  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:45.480348  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:45.519821  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:45.519845  306747 cri.go:89] found id: ""
	I1017 19:30:45.519853  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:45.519916  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.523961  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:45.524044  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:45.553268  306747 cri.go:89] found id: ""
	I1017 19:30:45.553295  306747 logs.go:282] 0 containers: []
	W1017 19:30:45.553336  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:45.553353  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:45.553376  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:45.581168  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:45.581199  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:45.659459  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:45.659495  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:45.698325  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:45.698356  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:45.730552  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:45.730578  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:45.761205  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:45.761233  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:45.859241  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:45.859345  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:45.879219  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:45.879249  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:45.956579  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:45.956613  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:46.038168  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:46.038207  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:46.088885  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:46.088920  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:46.156435  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:46.147068   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.148033   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.149640   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.150155   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.151669   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:46.147068   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.148033   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.149640   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.150155   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.151669   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
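The repeated "connection refused" errors on localhost:8443 mean kubectl on the node cannot reach the API server at all, even though a kube-apiserver container ID was found a few lines earlier. A minimal manual probe of that endpoint, assuming shell access to the control-plane node and that curl is available in the node image (both assumptions, not part of the captured run):

    sudo crictl ps --name kube-apiserver          # only running apiserver containers (the -a form above also lists exited ones)
    sudo curl -sk https://localhost:8443/healthz  # any HTTP response means the port is listening

Any HTTP status at all, even 401/403, would show the socket is accepting connections; the errors in this log indicate nothing is listening on 8443 during these passes.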
	I1017 19:30:48.657371  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:48.668345  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:48.668414  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:48.699974  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:48.699994  306747 cri.go:89] found id: ""
	I1017 19:30:48.700002  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:48.700055  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.703706  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:48.703773  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:48.729231  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:48.729255  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:48.729260  306747 cri.go:89] found id: ""
	I1017 19:30:48.729267  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:48.729347  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.733057  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.736560  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:48.736650  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:48.769891  306747 cri.go:89] found id: ""
	I1017 19:30:48.769917  306747 logs.go:282] 0 containers: []
	W1017 19:30:48.769925  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:48.769932  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:48.769988  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:48.796614  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:48.796633  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:48.796638  306747 cri.go:89] found id: ""
	I1017 19:30:48.796645  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:48.796697  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.800347  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.803641  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:48.803707  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:48.829352  306747 cri.go:89] found id: ""
	I1017 19:30:48.829375  306747 logs.go:282] 0 containers: []
	W1017 19:30:48.829384  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:48.829390  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:48.829448  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:48.863517  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:48.863542  306747 cri.go:89] found id: ""
	I1017 19:30:48.863551  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:48.863603  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.867339  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:48.867411  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:48.896584  306747 cri.go:89] found id: ""
	I1017 19:30:48.896609  306747 logs.go:282] 0 containers: []
	W1017 19:30:48.896618  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:48.896626  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:48.896639  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:48.990111  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:48.990146  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:49.015233  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:49.015265  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:49.040589  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:49.040623  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:49.100203  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:49.100237  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:49.135876  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:49.135909  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:49.168685  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:49.168756  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:49.211941  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:49.212009  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:49.278129  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:49.270279   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.271015   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.272492   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.272926   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.274542   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:49.270279   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.271015   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.272492   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.272926   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.274542   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:49.278151  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:49.278166  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:49.355582  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:49.355620  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:49.385861  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:49.385888  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:51.961962  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:51.973739  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:51.973839  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:52.007060  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:52.007089  306747 cri.go:89] found id: ""
	I1017 19:30:52.007098  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:52.007173  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.011950  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:52.012025  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:52.043424  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:52.043445  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:52.043450  306747 cri.go:89] found id: ""
	I1017 19:30:52.043458  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:52.043515  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.048102  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.051750  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:52.051836  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:52.091285  306747 cri.go:89] found id: ""
	I1017 19:30:52.091362  306747 logs.go:282] 0 containers: []
	W1017 19:30:52.091384  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:52.091412  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:52.091533  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:52.120853  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:52.120928  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:52.120947  306747 cri.go:89] found id: ""
	I1017 19:30:52.120962  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:52.121037  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.125047  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.128913  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:52.129029  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:52.155112  306747 cri.go:89] found id: ""
	I1017 19:30:52.155138  306747 logs.go:282] 0 containers: []
	W1017 19:30:52.155147  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:52.155153  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:52.155217  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:52.181654  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:52.181678  306747 cri.go:89] found id: ""
	I1017 19:30:52.181686  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:52.181738  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.185468  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:52.185538  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:52.210532  306747 cri.go:89] found id: ""
	I1017 19:30:52.210558  306747 logs.go:282] 0 containers: []
	W1017 19:30:52.210567  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:52.210577  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:52.210591  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:52.283758  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:52.283793  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:52.321133  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:52.321172  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:52.349409  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:52.349440  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:52.454035  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:52.454072  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:52.474228  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:52.474336  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:52.549970  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:52.541938   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.542794   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.543926   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.544704   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.546272   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:52.541938   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.542794   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.543926   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.544704   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.546272   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:52.550045  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:52.550073  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:52.637174  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:52.637221  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:52.668341  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:52.668418  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:52.761051  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:52.761091  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:52.792065  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:52.792160  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:55.319606  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:55.330935  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:55.331008  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:55.358717  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:55.358739  306747 cri.go:89] found id: ""
	I1017 19:30:55.358747  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:55.358802  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.362654  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:55.362769  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:55.397277  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:55.397301  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:55.397306  306747 cri.go:89] found id: ""
	I1017 19:30:55.397314  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:55.397368  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.401240  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.405131  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:55.405244  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:55.432480  306747 cri.go:89] found id: ""
	I1017 19:30:55.432602  306747 logs.go:282] 0 containers: []
	W1017 19:30:55.432627  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:55.432666  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:55.432750  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:55.465240  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:55.465314  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:55.465333  306747 cri.go:89] found id: ""
	I1017 19:30:55.465357  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:55.465448  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.469415  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.473023  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:55.473096  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:55.499608  306747 cri.go:89] found id: ""
	I1017 19:30:55.499681  306747 logs.go:282] 0 containers: []
	W1017 19:30:55.499704  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:55.499724  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:55.499814  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:55.526471  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:55.526494  306747 cri.go:89] found id: ""
	I1017 19:30:55.526502  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:55.526586  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.530319  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:55.530395  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:55.558617  306747 cri.go:89] found id: ""
	I1017 19:30:55.558639  306747 logs.go:282] 0 containers: []
	W1017 19:30:55.558647  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:55.558656  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:55.558668  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:55.578357  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:55.578390  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:55.642730  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:55.635023   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.635478   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.637010   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.637409   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.638832   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:55.635023   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.635478   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.637010   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.637409   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.638832   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:55.642749  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:55.642763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:55.673301  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:55.673329  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:55.735266  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:55.735301  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:55.777444  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:55.777474  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:55.891903  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:55.891985  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:55.976455  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:55.976492  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:56.005202  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:56.005238  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:56.034021  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:56.034049  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:56.086550  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:56.086581  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:58.687094  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:58.698343  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:58.698420  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:58.737082  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:58.737144  306747 cri.go:89] found id: ""
	I1017 19:30:58.737165  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:58.737251  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.740769  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:58.740830  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:58.768900  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:58.768920  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:58.768931  306747 cri.go:89] found id: ""
	I1017 19:30:58.768938  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:58.768991  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.773597  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.777023  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:58.777094  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:58.808627  306747 cri.go:89] found id: ""
	I1017 19:30:58.808654  306747 logs.go:282] 0 containers: []
	W1017 19:30:58.808675  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:58.808681  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:58.808778  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:58.833787  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:58.833810  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:58.833815  306747 cri.go:89] found id: ""
	I1017 19:30:58.833823  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:58.833902  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.837729  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.841076  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:58.841161  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:58.876060  306747 cri.go:89] found id: ""
	I1017 19:30:58.876089  306747 logs.go:282] 0 containers: []
	W1017 19:30:58.876099  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:58.876107  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:58.876189  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:58.906434  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:58.906509  306747 cri.go:89] found id: ""
	I1017 19:30:58.906524  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:58.906598  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.911053  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:58.911127  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:58.936724  306747 cri.go:89] found id: ""
	I1017 19:30:58.936748  306747 logs.go:282] 0 containers: []
	W1017 19:30:58.936757  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:58.936765  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:58.936776  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:59.014607  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:59.014643  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:59.044576  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:59.044655  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:59.124177  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:59.124211  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:59.156709  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:59.156737  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:59.175384  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:59.175413  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:59.209100  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:59.209136  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:59.235216  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:59.235244  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:59.337596  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:59.337631  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:59.405118  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:59.396347   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.396989   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.398679   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.399208   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.400795   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:59.396347   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.396989   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.398679   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.399208   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.400795   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:59.405140  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:59.405153  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:59.431225  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:59.431255  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:02.008171  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:02.020307  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:02.020387  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:02.051051  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:02.051079  306747 cri.go:89] found id: ""
	I1017 19:31:02.051099  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:02.051161  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.056015  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:02.056088  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:02.089743  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:02.089817  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:02.089836  306747 cri.go:89] found id: ""
	I1017 19:31:02.089856  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:02.089943  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.093857  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.097708  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:02.097837  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:02.123389  306747 cri.go:89] found id: ""
	I1017 19:31:02.123411  306747 logs.go:282] 0 containers: []
	W1017 19:31:02.123420  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:02.123426  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:02.123483  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:02.150505  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:02.150582  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:02.150596  306747 cri.go:89] found id: ""
	I1017 19:31:02.150605  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:02.150681  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.154543  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.158104  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:02.158177  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:02.186868  306747 cri.go:89] found id: ""
	I1017 19:31:02.186895  306747 logs.go:282] 0 containers: []
	W1017 19:31:02.186904  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:02.186911  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:02.186974  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:02.215359  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:02.215426  306747 cri.go:89] found id: ""
	I1017 19:31:02.215451  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:02.215524  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.219153  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:02.219266  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:02.246345  306747 cri.go:89] found id: ""
	I1017 19:31:02.246371  306747 logs.go:282] 0 containers: []
	W1017 19:31:02.246381  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:02.246391  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:02.246402  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:02.280313  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:02.280387  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:02.385786  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:02.385822  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:02.414602  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:02.414679  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:02.492313  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:02.492350  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:02.511027  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:02.511067  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:02.590723  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:02.582016   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.582767   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.584046   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.585740   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.586186   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:02.582016   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.582767   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.584046   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.585740   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.586186   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:02.590747  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:02.590762  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:02.653228  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:02.653264  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:02.687148  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:02.687183  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:02.790229  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:02.790269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:02.819586  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:02.819615  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
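The same container IDs come back on every pass above: one kube-apiserver (134fec16…), two etcd and two kube-scheduler entries, one kube-controller-manager, and never any coredns, kube-proxy, or kindnet match. Since the apiserver container exists but 8443 stays unreachable, its own recent output is the most useful thing to read next. A short sketch using the same crictl commands the harness already runs, with the container ID copied verbatim from the log above:

    # state of the apiserver container (Running vs Exited) and its most recent output
    sudo crictl ps -a --name kube-apiserver
    sudo crictl logs --tail 50 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b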
	I1017 19:31:05.355439  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:05.367250  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:05.367353  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:05.393587  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:05.393611  306747 cri.go:89] found id: ""
	I1017 19:31:05.393620  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:05.393674  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.397564  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:05.397685  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:05.423815  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:05.423840  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:05.423845  306747 cri.go:89] found id: ""
	I1017 19:31:05.423853  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:05.423921  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.427632  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.431060  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:05.431129  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:05.457152  306747 cri.go:89] found id: ""
	I1017 19:31:05.457176  306747 logs.go:282] 0 containers: []
	W1017 19:31:05.457186  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:05.457192  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:05.457256  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:05.483757  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:05.483779  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:05.483784  306747 cri.go:89] found id: ""
	I1017 19:31:05.483791  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:05.483845  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.487471  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.490789  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:05.490859  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:05.516653  306747 cri.go:89] found id: ""
	I1017 19:31:05.516676  306747 logs.go:282] 0 containers: []
	W1017 19:31:05.516684  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:05.516690  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:05.516793  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:05.542033  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:05.542059  306747 cri.go:89] found id: ""
	I1017 19:31:05.542091  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:05.542153  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.545908  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:05.545978  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:05.571870  306747 cri.go:89] found id: ""
	I1017 19:31:05.571892  306747 logs.go:282] 0 containers: []
	W1017 19:31:05.571901  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:05.571909  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:05.571923  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:05.649030  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:05.639899   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.640483   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.642053   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.642716   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.644399   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:05.639899   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.640483   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.642053   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.642716   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.644399   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:05.649050  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:05.649062  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:05.677036  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:05.677065  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:05.718764  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:05.718795  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:05.803861  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:05.803897  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:05.835788  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:05.835814  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:05.864823  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:05.864853  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:05.947756  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:05.947788  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:05.979938  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:05.980005  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:06.080355  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:06.080392  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:06.104116  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:06.104145  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
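	The round of commands above is the pattern every retry follows: for each control-plane component, minikube shells out to `sudo crictl ps -a --quiet --name=<component>` to discover container IDs, then tails each one with `sudo crictl logs --tail 400 <id>`, plus journalctl for crio and kubelet and a dmesg tail. A minimal stand-alone sketch of that discovery-and-tail loop, assuming only that crictl is on PATH and sudo can reach the CRI socket (an illustration, not minikube's own code):

	// discovery.go: a stand-alone sketch (not minikube's implementation) of the
	// container-discovery pattern in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns any IDs found.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
				continue
			}
			for _, id := range ids {
				// Tail the last 400 lines of each matching container, mirroring
				// the "Gathering logs for ..." steps in the report.
				logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s\n", c, id, logs)
			}
		}
	}

	In this run the coredns, kube-proxy, and kindnet queries come back empty each time, which is why only apiserver, etcd, scheduler, and controller-manager logs are tailed.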
	I1017 19:31:08.667177  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:08.677727  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:08.677793  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:08.704338  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:08.704362  306747 cri.go:89] found id: ""
	I1017 19:31:08.704370  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:08.704422  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.707981  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:08.708049  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:08.733111  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:08.733130  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:08.733135  306747 cri.go:89] found id: ""
	I1017 19:31:08.733142  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:08.733201  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.737039  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.740374  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:08.740480  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:08.768239  306747 cri.go:89] found id: ""
	I1017 19:31:08.768307  306747 logs.go:282] 0 containers: []
	W1017 19:31:08.768338  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:08.768381  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:08.768471  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:08.795436  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:08.795499  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:08.795524  306747 cri.go:89] found id: ""
	I1017 19:31:08.795537  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:08.795609  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.799450  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.803242  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:08.803312  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:08.831323  306747 cri.go:89] found id: ""
	I1017 19:31:08.831348  306747 logs.go:282] 0 containers: []
	W1017 19:31:08.831358  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:08.831364  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:08.831427  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:08.865991  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:08.866014  306747 cri.go:89] found id: ""
	I1017 19:31:08.866022  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:08.866077  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.870085  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:08.870174  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:08.905447  306747 cri.go:89] found id: ""
	I1017 19:31:08.905475  306747 logs.go:282] 0 containers: []
	W1017 19:31:08.905483  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:08.905492  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:08.905504  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:08.988463  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:08.988574  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:09.021674  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:09.021711  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:09.050080  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:09.050111  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:09.126939  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:09.126972  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:09.161551  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:09.161580  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:09.179459  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:09.179490  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:09.209038  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:09.209066  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:09.271767  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:09.271810  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:09.373919  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:09.373956  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:09.439533  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:09.431442   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.432120   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.433687   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.434214   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.435793   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:09.431442   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.432120   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.433687   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.434214   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.435793   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:09.439556  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:09.439570  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
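	Each retry round opens with the `sudo pgrep -xnf kube-apiserver.*minikube.*` probe seen above, which simply asks whether an apiserver process belonging to this profile exists before any container logs are gathered. A hedged sketch of that single step, wrapping the same pgrep arguments in Go purely for illustration:

	// apiserver_probe.go: wraps the pgrep arguments copied from the log,
	// purely as an illustration of the per-round process check.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// -f matches against the full command line, -x requires the pattern to
		// match that whole line, -n returns only the newest matching process.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			// pgrep exits non-zero when nothing matches, which is the state the
			// retry loop in the log keeps observing while the control plane restarts.
			fmt.Println("no kube-apiserver process found yet:", err)
			return
		}
		fmt.Println("kube-apiserver PID:", strings.TrimSpace(string(out)))
	}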
	I1017 19:31:11.978816  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:11.990102  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:11.990174  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:12.023196  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:12.023225  306747 cri.go:89] found id: ""
	I1017 19:31:12.023235  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:12.023302  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.027739  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:12.027832  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:12.055241  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:12.055265  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:12.055270  306747 cri.go:89] found id: ""
	I1017 19:31:12.055278  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:12.055336  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.059592  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.064052  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:12.064121  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:12.103548  306747 cri.go:89] found id: ""
	I1017 19:31:12.103575  306747 logs.go:282] 0 containers: []
	W1017 19:31:12.103584  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:12.103591  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:12.103650  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:12.131971  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:12.131995  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:12.132000  306747 cri.go:89] found id: ""
	I1017 19:31:12.132008  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:12.132063  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.136064  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.139529  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:12.139597  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:12.165954  306747 cri.go:89] found id: ""
	I1017 19:31:12.165977  306747 logs.go:282] 0 containers: []
	W1017 19:31:12.165985  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:12.165991  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:12.166049  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:12.195543  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:12.195568  306747 cri.go:89] found id: ""
	I1017 19:31:12.195577  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:12.195632  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.199531  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:12.199603  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:12.225881  306747 cri.go:89] found id: ""
	I1017 19:31:12.225911  306747 logs.go:282] 0 containers: []
	W1017 19:31:12.225920  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:12.225929  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:12.225942  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:12.259524  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:12.259552  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:12.333075  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:12.333112  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:12.363221  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:12.363249  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:12.467386  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:12.467420  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:12.498049  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:12.498077  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:12.577701  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:12.577736  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:12.607614  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:12.607650  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:12.637568  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:12.637597  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:12.717020  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:12.717054  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:12.740140  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:12.740170  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:12.806245  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:12.796625   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.797249   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.799733   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.800324   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.802649   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:12.796625   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.797249   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.799733   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.800324   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.802649   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
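	The repeated `connection refused` errors against localhost:8443 are the actual symptom in this section: kubectl cannot reach the apiserver, so `describe nodes` fails on every pass while the control plane is restarting. One way to watch for the port coming back, sketched under the assumption that polling https://localhost:8443/healthz every ~3 seconds is an acceptable readiness check (the endpoint, cadence, and TLS handling are not taken from minikube):

	// wait_apiserver.go: polls https://localhost:8443/healthz until it answers or
	// a deadline passes. A sketch only; not minikube's wait logic.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver's serving certificate is not trusted on the host, so
			// skip verification for this readiness probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://localhost:8443/healthz")
			if err == nil {
				resp.Body.Close()
				fmt.Printf("apiserver answered with HTTP %d\n", resp.StatusCode)
				return
			}
			fmt.Println("apiserver not ready yet:", err)
			time.Sleep(3 * time.Second) // roughly the retry cadence visible in the log
		}
		fmt.Println("gave up waiting for the apiserver")
	}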
	I1017 19:31:15.306473  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:15.318959  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:15.319030  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:15.345727  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:15.345823  306747 cri.go:89] found id: ""
	I1017 19:31:15.345847  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:15.345935  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.349860  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:15.349937  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:15.382414  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:15.382437  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:15.382442  306747 cri.go:89] found id: ""
	I1017 19:31:15.382463  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:15.382539  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.386718  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.390470  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:15.390578  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:15.417577  306747 cri.go:89] found id: ""
	I1017 19:31:15.417652  306747 logs.go:282] 0 containers: []
	W1017 19:31:15.417668  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:15.417676  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:15.417743  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:15.445163  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:15.445206  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:15.445211  306747 cri.go:89] found id: ""
	I1017 19:31:15.445220  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:15.445305  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.450196  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.453988  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:15.454058  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:15.479623  306747 cri.go:89] found id: ""
	I1017 19:31:15.479647  306747 logs.go:282] 0 containers: []
	W1017 19:31:15.479655  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:15.479662  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:15.479725  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:15.505913  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:15.505936  306747 cri.go:89] found id: ""
	I1017 19:31:15.505953  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:15.506007  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.509808  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:15.509881  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:15.535383  306747 cri.go:89] found id: ""
	I1017 19:31:15.535408  306747 logs.go:282] 0 containers: []
	W1017 19:31:15.535418  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:15.535428  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:15.535440  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:15.561245  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:15.561272  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:15.622736  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:15.622771  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:15.660115  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:15.660150  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:15.758501  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:15.758536  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:15.778239  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:15.778273  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:15.857887  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:15.842831   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.843942   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.845164   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.846077   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.848805   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:15.842831   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.843942   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.845164   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.846077   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.848805   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:15.857910  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:15.857926  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:15.946523  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:15.946560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:15.980219  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:15.980245  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:16.013998  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:16.014027  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:16.095391  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:16.095426  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:18.629382  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:18.642985  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:18.643054  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:18.669511  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:18.669532  306747 cri.go:89] found id: ""
	I1017 19:31:18.669541  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:18.669601  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.673633  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:18.673707  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:18.702215  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:18.702239  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:18.702244  306747 cri.go:89] found id: ""
	I1017 19:31:18.702252  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:18.702331  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.709379  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.717482  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:18.717554  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:18.744246  306747 cri.go:89] found id: ""
	I1017 19:31:18.744269  306747 logs.go:282] 0 containers: []
	W1017 19:31:18.744277  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:18.744283  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:18.744337  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:18.770169  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:18.770192  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:18.770197  306747 cri.go:89] found id: ""
	I1017 19:31:18.770205  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:18.770271  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.774060  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.777555  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:18.777624  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:18.804459  306747 cri.go:89] found id: ""
	I1017 19:31:18.804485  306747 logs.go:282] 0 containers: []
	W1017 19:31:18.804494  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:18.804500  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:18.804582  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:18.831698  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:18.831721  306747 cri.go:89] found id: ""
	I1017 19:31:18.831730  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:18.831783  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.837132  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:18.837273  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:18.870956  306747 cri.go:89] found id: ""
	I1017 19:31:18.870983  306747 logs.go:282] 0 containers: []
	W1017 19:31:18.870992  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:18.871001  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:18.871012  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:18.986913  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:18.986950  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:19.007461  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:19.007493  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:19.035000  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:19.035029  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:19.116120  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:19.116154  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:19.146274  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:19.146303  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:19.226087  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:19.226126  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:19.274249  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:19.274285  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:19.342797  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:19.333272   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.333919   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.335774   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.336320   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.338756   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:19.333272   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.333919   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.335774   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.336320   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.338756   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:19.342824  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:19.342837  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:19.405167  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:19.405241  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:19.437359  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:19.437389  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:21.966216  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:21.977051  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:21.977124  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:22.010370  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:22.010393  306747 cri.go:89] found id: ""
	I1017 19:31:22.010401  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:22.010463  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.014786  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:22.014905  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:22.054881  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:22.054905  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:22.054910  306747 cri.go:89] found id: ""
	I1017 19:31:22.054917  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:22.054974  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.058919  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.062725  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:22.062801  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:22.092827  306747 cri.go:89] found id: ""
	I1017 19:31:22.092910  306747 logs.go:282] 0 containers: []
	W1017 19:31:22.092926  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:22.092935  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:22.093011  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:22.120574  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:22.120597  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:22.120602  306747 cri.go:89] found id: ""
	I1017 19:31:22.120609  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:22.120665  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.124579  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.128240  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:22.128314  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:22.155355  306747 cri.go:89] found id: ""
	I1017 19:31:22.155382  306747 logs.go:282] 0 containers: []
	W1017 19:31:22.155392  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:22.155398  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:22.155457  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:22.182686  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:22.182750  306747 cri.go:89] found id: ""
	I1017 19:31:22.182771  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:22.182857  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.186655  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:22.186754  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:22.211995  306747 cri.go:89] found id: ""
	I1017 19:31:22.212020  306747 logs.go:282] 0 containers: []
	W1017 19:31:22.212029  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:22.212038  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:22.212080  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:22.310483  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:22.310518  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:22.376696  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:22.367517   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.368315   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.370151   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.370790   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.372572   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:22.367517   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.368315   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.370151   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.370790   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.372572   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:22.376758  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:22.376778  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:22.406493  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:22.406521  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:22.425071  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:22.425110  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:22.454385  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:22.454416  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:22.516625  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:22.516662  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:22.551521  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:22.551555  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:22.645961  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:22.645999  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:22.676665  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:22.676691  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:22.757888  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:22.758011  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:25.307695  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:25.318532  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:25.318666  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:25.351844  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:25.351866  306747 cri.go:89] found id: ""
	I1017 19:31:25.351873  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:25.351936  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.355571  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:25.355637  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:25.382616  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:25.382640  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:25.382646  306747 cri.go:89] found id: ""
	I1017 19:31:25.382664  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:25.382717  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.386649  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.390174  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:25.390311  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:25.417606  306747 cri.go:89] found id: ""
	I1017 19:31:25.417630  306747 logs.go:282] 0 containers: []
	W1017 19:31:25.417639  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:25.417645  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:25.417706  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:25.445452  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:25.445475  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:25.445480  306747 cri.go:89] found id: ""
	I1017 19:31:25.445487  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:25.445541  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.449471  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.452872  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:25.452956  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:25.480615  306747 cri.go:89] found id: ""
	I1017 19:31:25.480648  306747 logs.go:282] 0 containers: []
	W1017 19:31:25.480658  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:25.480664  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:25.480732  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:25.507575  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:25.507595  306747 cri.go:89] found id: ""
	I1017 19:31:25.507603  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:25.507669  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.512130  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:25.512199  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:25.539371  306747 cri.go:89] found id: ""
	I1017 19:31:25.539441  306747 logs.go:282] 0 containers: []
	W1017 19:31:25.539463  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:25.539488  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:25.539527  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:25.619877  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:25.619914  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:25.638042  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:25.638071  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:25.677301  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:25.677335  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:25.768647  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:25.768682  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:25.808421  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:25.808456  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:25.833684  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:25.833709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:25.930177  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:25.930222  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:25.981992  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:25.982022  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:26.087083  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:26.087123  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:26.158486  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:26.150658   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.151278   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.152877   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.153291   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.154745   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:26.150658   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.151278   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.152877   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.153291   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.154745   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:26.158506  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:26.158519  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:28.685675  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:28.697159  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:28.697228  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:28.724197  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:28.724223  306747 cri.go:89] found id: ""
	I1017 19:31:28.724231  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:28.724294  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.728163  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:28.728249  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:28.755375  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:28.755400  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:28.755405  306747 cri.go:89] found id: ""
	I1017 19:31:28.755413  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:28.755465  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.759475  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.762827  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:28.762901  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:28.788123  306747 cri.go:89] found id: ""
	I1017 19:31:28.788150  306747 logs.go:282] 0 containers: []
	W1017 19:31:28.788159  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:28.788165  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:28.788221  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:28.818579  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:28.818611  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:28.818617  306747 cri.go:89] found id: ""
	I1017 19:31:28.818624  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:28.818677  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.822375  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.825827  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:28.825901  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:28.856344  306747 cri.go:89] found id: ""
	I1017 19:31:28.856371  306747 logs.go:282] 0 containers: []
	W1017 19:31:28.856379  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:28.856386  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:28.856456  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:28.883877  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:28.883901  306747 cri.go:89] found id: ""
	I1017 19:31:28.883909  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:28.883969  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.890405  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:28.890482  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:28.919970  306747 cri.go:89] found id: ""
	I1017 19:31:28.919997  306747 logs.go:282] 0 containers: []
	W1017 19:31:28.920007  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:28.920016  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:28.920028  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:28.938590  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:28.938619  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:29.012463  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:29.012502  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:29.051714  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:29.051751  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:29.139864  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:29.139904  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:29.167130  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:29.167157  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:29.244122  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:29.244163  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:29.289243  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:29.289271  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:29.365219  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:29.356772   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.357390   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.358919   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.359407   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.360893   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:29.356772   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.357390   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.358919   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.359407   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.360893   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:29.365246  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:29.365260  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:29.391983  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:29.392013  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:29.418030  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:29.418136  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:32.016682  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:32.027928  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:32.028056  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:32.057743  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:32.057770  306747 cri.go:89] found id: ""
	I1017 19:31:32.057779  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:32.057832  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.062215  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:32.062350  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:32.096282  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:32.096359  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:32.096379  306747 cri.go:89] found id: ""
	I1017 19:31:32.096402  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:32.096490  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.100272  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.104020  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:32.104094  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:32.130658  306747 cri.go:89] found id: ""
	I1017 19:31:32.130684  306747 logs.go:282] 0 containers: []
	W1017 19:31:32.130692  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:32.130698  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:32.130785  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:32.158436  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:32.158459  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:32.158464  306747 cri.go:89] found id: ""
	I1017 19:31:32.158472  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:32.158524  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.162501  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.165977  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:32.166093  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:32.192337  306747 cri.go:89] found id: ""
	I1017 19:31:32.192414  306747 logs.go:282] 0 containers: []
	W1017 19:31:32.192438  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:32.192460  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:32.192566  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:32.224591  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:32.224625  306747 cri.go:89] found id: ""
	I1017 19:31:32.224643  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:32.224699  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.228992  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:32.229114  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:32.263902  306747 cri.go:89] found id: ""
	I1017 19:31:32.263936  306747 logs.go:282] 0 containers: []
	W1017 19:31:32.263945  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:32.263954  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:32.263970  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:32.331346  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:32.321358   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.322175   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.325150   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.325743   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.327508   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:32.321358   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.322175   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.325150   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.325743   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.327508   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:32.331370  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:32.331383  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:32.358344  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:32.358372  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:32.419310  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:32.419347  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:32.462060  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:32.462091  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:32.543672  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:32.543709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:32.572300  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:32.572327  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:32.650752  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:32.650785  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:32.687208  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:32.687239  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:32.785332  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:32.785370  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:32.804237  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:32.804272  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:35.336200  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:35.351300  306747 out.go:203] 
	W1017 19:31:35.354294  306747 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1017 19:31:35.354331  306747 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1017 19:31:35.354341  306747 out.go:285] * Related issues:
	W1017 19:31:35.354355  306747 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1017 19:31:35.354368  306747 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1017 19:31:35.357325  306747 out.go:203] 
	
	
	==> CRI-O <==
	Oct 17 19:26:12 ha-254035 crio[663]: time="2025-10-17T19:26:12.336555027Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:26:12 ha-254035 crio[663]: time="2025-10-17T19:26:12.33658308Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:26:12 ha-254035 crio[663]: time="2025-10-17T19:26:12.339801184Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:26:12 ha-254035 crio[663]: time="2025-10-17T19:26:12.339831682Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.953037254Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=202e1d64-912a-476c-ba5a-77b37bc42979 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.953839727Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=6205eb3f-5cb1-4748-8710-0ffe69b4490c name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.955014194Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-254035/kube-controller-manager" id=081f7878-c585-4466-b2db-1bae5c6893ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.955225536Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.961488794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.962588933Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.983518924Z" level=info msg="Created container 09b363cd1ecad740d92d4ebc587ded23344ec9174985137bd42062048a60cec4: kube-system/kube-controller-manager-ha-254035/kube-controller-manager" id=081f7878-c585-4466-b2db-1bae5c6893ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.984251327Z" level=info msg="Starting container: 09b363cd1ecad740d92d4ebc587ded23344ec9174985137bd42062048a60cec4" id=0d55a9d8-f1b5-40f1-8bd6-984aab4be84b name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.987082086Z" level=info msg="Started container" PID=1467 containerID=09b363cd1ecad740d92d4ebc587ded23344ec9174985137bd42062048a60cec4 description=kube-system/kube-controller-manager-ha-254035/kube-controller-manager id=0d55a9d8-f1b5-40f1-8bd6-984aab4be84b name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee9f2d44d32377576c274975d42c83c6d10327b8cf9c78d24d11e2f783796a0e
	Oct 17 19:26:29 ha-254035 conmon[1199]: conmon f662d4e90719bc39bd00 <ninfo>: container 1202 exited with status 1
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.433901954Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f8df12f8-0980-4df8-b1a9-6ee17b7f8ffd name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.435915053Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ba31ec85-e31e-4fc3-9dcf-e12b08bd6e71 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.441058833Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f9d9837c-aba3-4e03-853d-b95f80acea4f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.441479975Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.45712493Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.457473179Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fdd046ea9be9a16a63c03510b49257ec82013029fd6bc07010444052d640f8f0/merged/etc/passwd: no such file or directory"
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.457519947Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fdd046ea9be9a16a63c03510b49257ec82013029fd6bc07010444052d640f8f0/merged/etc/group: no such file or directory"
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.457904732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.498042086Z" level=info msg="Created container faca00e9a381032f2a2a1ca361d6f8261cbb527f61722910f84bf86e69627f22: kube-system/storage-provisioner/storage-provisioner" id=f9d9837c-aba3-4e03-853d-b95f80acea4f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.499778687Z" level=info msg="Starting container: faca00e9a381032f2a2a1ca361d6f8261cbb527f61722910f84bf86e69627f22" id=14304d27-6de8-4811-9a66-8c4d47f3188f name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.503194694Z" level=info msg="Started container" PID=1483 containerID=faca00e9a381032f2a2a1ca361d6f8261cbb527f61722910f84bf86e69627f22 description=kube-system/storage-provisioner/storage-provisioner id=14304d27-6de8-4811-9a66-8c4d47f3188f name=/runtime.v1.RuntimeService/StartContainer sandboxID=c2cae7d5aa8d4e785124a213f6c2cc39a98e7313513ec9ea001c05e6360e2f93
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	faca00e9a3810       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       2                   c2cae7d5aa8d4       storage-provisioner                 kube-system
	09b363cd1ecad       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   5 minutes ago       Running             kube-controller-manager   5                   ee9f2d44d3237       kube-controller-manager-ha-254035   kube-system
	576cfa798259d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 minutes ago       Running             kindnet-cni               1                   70bac1a7c5264       kindnet-gzzsg                       kube-system
	9ee89513ed12a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   9b9434e716ce6       coredns-66bc5c9577-wbgc8            kube-system
	758a5862ad867       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   5 minutes ago       Running             busybox                   1                   be0fe8edcd6ba       busybox-7b57f96db7-nc6x2            default
	c52f3d12f85be       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 minutes ago       Running             kube-proxy                1                   e47d5acf8c94c       kube-proxy-548b2                    kube-system
	f662d4e90719b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Exited              storage-provisioner       1                   c2cae7d5aa8d4       storage-provisioner                 kube-system
	8edb27c8d6015       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   269b656ae24bb       coredns-66bc5c9577-gfklr            kube-system
	8f2e18695e457       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Exited              kube-controller-manager   4                   ee9f2d44d3237       kube-controller-manager-ha-254035   kube-system
	26c8280f98ef8       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Running             kube-apiserver            2                   5952fd9040500       kube-apiserver-ha-254035            kube-system
	a9f69dd8228df       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   7 minutes ago       Running             kube-scheduler            1                   9e4e211817dbb       kube-scheduler-ha-254035            kube-system
	2dc181e1d75c1       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   7 minutes ago       Running             kube-vip                  0                   75776cf83b5c8       kube-vip-ha-254035                  kube-system
	99ffff8c4838d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   7 minutes ago       Running             etcd                      1                   d1536a316aa1d       etcd-ha-254035                      kube-system
	b745cb636fe8e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   7 minutes ago       Exited              kube-apiserver            1                   5952fd9040500       kube-apiserver-ha-254035            kube-system
	
	
	==> coredns [8edb27c8d6015a43dc1b4fd9d8f695495a303a3c83de005f1197b1c1420e5d7e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58119 - 23158 "HINFO IN 703179826096282682.4600017575089700098. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.025326139s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [9ee89513ed12a83eea9b477aadcc58ed9f5e2d62a017cd43bad27b1118f04b45] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59051 - 49005 "HINFO IN 2456025369292059622.4845573965486641381. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018045022s
	
	
	==> describe nodes <==
	Name:               ha-254035
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_17_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:17:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:31:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:31:37 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:31:37 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:31:37 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:31:37 +0000   Fri, 17 Oct 2025 19:18:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-254035
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                eadb5c5f-dcbb-485c-aea7-3aa5b951fd9e
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-nc6x2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-gfklr             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-wbgc8             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-254035                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-gzzsg                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-254035             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-254035    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-548b2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-254035             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-254035                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m44s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-254035 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-254035 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-254035 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-254035 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           8m32s                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   NodeHasSufficientMemory  7m52s (x8 over 7m53s)  kubelet          Node ha-254035 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m52s (x8 over 7m53s)  kubelet          Node ha-254035 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m52s (x8 over 7m53s)  kubelet          Node ha-254035 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m13s                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	
	
	Name:               ha-254035-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_18_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:18:42 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:23:19 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 17 Oct 2025 19:23:09 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 17 Oct 2025 19:23:09 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 17 Oct 2025 19:23:09 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 17 Oct 2025 19:23:09 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-254035-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                6c5e97e0-fa27-407d-a976-b646e8a40ca5
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6xjlp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-254035-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-vss98                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-254035-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-254035-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-b4fr6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-254035-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-254035-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 8m28s                 kube-proxy       
	  Normal   Starting                 12m                   kube-proxy       
	  Normal   RegisteredNode           13m                   node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           12m                   node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           11m                   node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   Starting                 9m10s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m10s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m9s (x8 over 9m10s)  kubelet          Node ha-254035-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     9m9s (x8 over 9m10s)  kubelet          Node ha-254035-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    9m9s (x8 over 9m10s)  kubelet          Node ha-254035-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeNotReady             8m37s                 node-controller  Node ha-254035-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           8m32s                 node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           5m13s                 node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   NodeNotReady             4m23s                 node-controller  Node ha-254035-m02 status is now: NodeNotReady
	
	
	Name:               ha-254035-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_20_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:19:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:23:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 17 Oct 2025 19:21:41 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 17 Oct 2025 19:21:41 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 17 Oct 2025 19:21:41 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 17 Oct 2025 19:21:41 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-254035-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                2f343c58-0cc9-444a-bc88-7799c3ff52df
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-979zm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-254035-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-2k9kj                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-254035-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-254035-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-k56cv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-254035-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-254035-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        11m    kube-proxy       
	  Normal  RegisteredNode  11m    node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal  RegisteredNode  8m32s  node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal  RegisteredNode  5m13s  node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal  NodeNotReady    4m23s  node-controller  Node ha-254035-m03 status is now: NodeNotReady
	
	
	Name:               ha-254035-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_21_16_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:21:15 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:22:57 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 17 Oct 2025 19:21:57 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 17 Oct 2025 19:21:57 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 17 Oct 2025 19:21:57 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 17 Oct 2025 19:21:57 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-254035-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                12691412-a8b5-426e-846e-d6161e527ea6
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pwhwv       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-fr5ts    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-254035-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-254035-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-254035-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   NodeReady                9m47s              kubelet          Node ha-254035-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m32s              node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           5m13s              node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   NodeNotReady             4m23s              node-controller  Node ha-254035-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Oct17 18:30] overlayfs: idmapped layers are currently not supported
	[Oct17 18:31] overlayfs: idmapped layers are currently not supported
	[  +9.357480] overlayfs: idmapped layers are currently not supported
	[Oct17 18:33] overlayfs: idmapped layers are currently not supported
	[  +5.779853] overlayfs: idmapped layers are currently not supported
	[Oct17 18:34] overlayfs: idmapped layers are currently not supported
	[Oct17 18:35] overlayfs: idmapped layers are currently not supported
	[Oct17 18:36] overlayfs: idmapped layers are currently not supported
	[ +20.850590] overlayfs: idmapped layers are currently not supported
	[Oct17 18:38] overlayfs: idmapped layers are currently not supported
	[ +19.812679] overlayfs: idmapped layers are currently not supported
	[Oct17 18:39] overlayfs: idmapped layers are currently not supported
	[ +19.225178] overlayfs: idmapped layers are currently not supported
	[Oct17 18:40] overlayfs: idmapped layers are currently not supported
	[Oct17 18:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 18:57] overlayfs: idmapped layers are currently not supported
	[Oct17 19:03] overlayfs: idmapped layers are currently not supported
	[Oct17 19:04] overlayfs: idmapped layers are currently not supported
	[Oct17 19:17] overlayfs: idmapped layers are currently not supported
	[Oct17 19:18] overlayfs: idmapped layers are currently not supported
	[Oct17 19:19] overlayfs: idmapped layers are currently not supported
	[Oct17 19:21] overlayfs: idmapped layers are currently not supported
	[Oct17 19:22] overlayfs: idmapped layers are currently not supported
	[Oct17 19:23] overlayfs: idmapped layers are currently not supported
	[  +4.119232] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [99ffff8c4838d302fd86aa2def104fc0bc5a061a4b4b00a66b6659be26e84f94] <==
	{"level":"warn","ts":"2025-10-17T19:31:43.882708Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:43.977687Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:43.982826Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:43.985341Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:43.987418Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:43.988433Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:43.994133Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:44.001986Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:44.011313Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:44.016640Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:44.019914Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:44.023901Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:44.031748Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:44.040270Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:44.044737Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:44.047776Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:44.051623Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:44.060020Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:44.070593Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:44.075373Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:44.082411Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:44.087357Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:44.091921Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:44.099850Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:44.108115Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:31:44 up  2:14,  0 user,  load average: 1.29, 1.25, 1.26
	Linux ha-254035 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [576cfa798259d8160ac05728f7d414a328778671800ac5aa4b4d45bfd6b32ca7] <==
	I1017 19:31:12.316884       1 main.go:301] handling current node
	I1017 19:31:22.316591       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:31:22.316724       1 main.go:301] handling current node
	I1017 19:31:22.316765       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:31:22.316799       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:31:22.316958       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:31:22.316999       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:31:22.317085       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 19:31:22.317118       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	I1017 19:31:32.318786       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:31:32.318883       1 main.go:301] handling current node
	I1017 19:31:32.318923       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:31:32.318956       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:31:32.319124       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:31:32.319162       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:31:32.319267       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 19:31:32.319300       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	I1017 19:31:42.312669       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:31:42.312715       1 main.go:301] handling current node
	I1017 19:31:42.312734       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:31:42.312741       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:31:42.312914       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:31:42.312921       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:31:42.312977       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 19:31:42.312984       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [26c8280f98ef8d0b35d3d3f933f908e0be045364d9887ae7338e14fc4e4385e4] <==
	I1017 19:25:41.080327       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 19:25:41.096711       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 19:25:41.096824       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 19:25:41.097844       1 cache.go:39] Caches are synced for autoregister controller
	I1017 19:25:41.175963       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 19:25:41.240687       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:25:41.270984       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	W1017 19:25:41.278063       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1017 19:25:41.280292       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 19:25:41.288893       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 19:25:41.289028       1 policy_source.go:240] refreshing policies
	I1017 19:25:41.289185       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 19:25:41.331450       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:25:41.383818       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:25:41.406733       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1017 19:25:41.413308       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1017 19:25:45.477912       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1017 19:25:45.579324       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 19:25:45.579417       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	W1017 19:25:46.424106       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1017 19:25:47.046652       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1017 19:26:06.426319       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1017 19:27:22.125956       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 19:27:22.236976       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:27:22.377213       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [b745cb636fe8e12797dbad3808d1af04aa579d4fbd2ba8ac91052e88e1d9594d] <==
	{"level":"warn","ts":"2025-10-17T19:24:55.662540Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f51a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.662541Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001002000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.662657Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f51a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.662764Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016fad20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.662902Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016fad20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.663035Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400253bc20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.663152Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400253bc20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.663213Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001002000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.663271Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40011003c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.663383Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001002000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.664911Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016fba40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665014Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016fba40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665142Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016fba40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665183Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026141e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665234Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026141e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665283Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002615680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665351Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002b00960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665456Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40027650e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.662006Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014c32c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:25:01.465860Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001002d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	E1017 19:25:01.465976       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E1017 19:25:01.466227       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="GET" URI="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-254035?timeout=10s" auditID="46bb9fa1-62e8-45b2-afdf-459f2b875119"
	E1017 19:25:01.466249       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.626µs" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-254035" result=null
	F1017 19:25:02.365194       1 hooks.go:204] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	{"level":"warn","ts":"2025-10-17T19:25:02.527979Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f51860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	
	
	==> kube-controller-manager [09b363cd1ecad740d92d4ebc587ded23344ec9174985137bd42062048a60cec4] <==
	I1017 19:26:31.955042       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:26:31.955150       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 19:26:31.955182       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 19:26:31.960320       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 19:26:31.964011       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 19:26:31.973631       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 19:26:31.974067       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 19:26:31.974279       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 19:26:31.974994       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 19:26:31.975207       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:26:31.975822       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 19:26:31.976008       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 19:26:31.976066       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 19:26:31.976280       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 19:26:31.977778       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 19:26:31.982328       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 19:26:31.982451       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-254035-m04"
	I1017 19:26:31.985705       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:26:31.985877       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 19:26:31.996213       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 19:26:31.999311       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:26:32.005595       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 19:26:32.011326       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 19:26:32.011373       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 19:27:22.463777       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	
	
	==> kube-controller-manager [8f2e18695e457839c6b48b8cf9525b8e3133c1a6d2c7b0e484fc6130ec820a7a] <==
	I1017 19:25:26.963428       1 serving.go:386] Generated self-signed cert in-memory
	I1017 19:25:27.847264       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1017 19:25:27.847300       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:25:27.848875       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1017 19:25:27.849078       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1017 19:25:27.849285       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1017 19:25:27.849330       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 19:25:37.867683       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [c52f3d12f85be9ad9f0f95f3255def1ee473db156fc0776fb80fa92aad03d8c3] <==
	I1017 19:25:59.103590       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:25:59.177968       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:25:59.279067       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:25:59.279103       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 19:25:59.279223       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:25:59.297489       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:25:59.297617       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:25:59.301231       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:25:59.301529       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:25:59.301552       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:25:59.305385       1 config.go:200] "Starting service config controller"
	I1017 19:25:59.305486       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:25:59.305654       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:25:59.305943       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:25:59.306000       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:25:59.306196       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:25:59.307366       1 config.go:309] "Starting node config controller"
	I1017 19:25:59.311349       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:25:59.311421       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:25:59.405715       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:25:59.406183       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:25:59.406288       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a9f69dd8228df806b3caf0a6a77814b3035f6624474afd789ff17d36b93becbb] <==
	E1017 19:24:43.700780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:24:44.750268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:24:46.554973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:24:47.376765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 19:24:47.902102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:25:06.878063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:25:07.212761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:25:12.280794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:25:12.456185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:25:13.739609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:25:14.975535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:25:16.328928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:25:18.380682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:25:20.375603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:25:21.123675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:25:21.517709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:25:21.932068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:25:22.080795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:25:22.270841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:25:25.020718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:25:25.490826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:25:28.981572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:25:29.683639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 19:25:35.763654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1017 19:26:13.713049       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.312257     795 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-gfklr_kube-system(8bf2b43b-91c9-4531-a571-36060412860e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.312386     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-gfklr" podUID="8bf2b43b-91c9-4531-a571-36060412860e"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.317109     795 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-gzzsg_kube-system(9d09bb8e-ddb5-4533-9215-83fefb05a7eb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.317252     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-gzzsg" podUID="9d09bb8e-ddb5-4533-9215-83fefb05a7eb"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.319138     795 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-wbgc8_kube-system(8e82e918-326c-4295-82ea-e35a31f64287): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.319272     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-wbgc8" podUID="8e82e918-326c-4295-82ea-e35a31f64287"
	Oct 17 19:25:47 ha-254035 kubelet[795]: I1017 19:25:47.321488     795 scope.go:117] "RemoveContainer" containerID="8f2e18695e457839c6b48b8cf9525b8e3133c1a6d2c7b0e484fc6130ec820a7a"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.321734     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-254035_kube-system(9046e63156250f7e5e453bf172e4f118)\"" pod="kube-system/kube-controller-manager-ha-254035" podUID="9046e63156250f7e5e453bf172e4f118"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.322802     795 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-proxy start failed in pod kube-proxy-548b2_kube-system(4b772887-90df-4871-9343-69349bdda859): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.322858     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-548b2" podUID="4b772887-90df-4871-9343-69349bdda859"
	Oct 17 19:25:47 ha-254035 kubelet[795]: I1017 19:25:47.952228     795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f120554cc7e7eb74e29c79f31815613" path="/var/lib/kubelet/pods/4f120554cc7e7eb74e29c79f31815613/volumes"
	Oct 17 19:25:48 ha-254035 kubelet[795]: I1017 19:25:48.323043     795 scope.go:117] "RemoveContainer" containerID="8f2e18695e457839c6b48b8cf9525b8e3133c1a6d2c7b0e484fc6130ec820a7a"
	Oct 17 19:25:48 ha-254035 kubelet[795]: E1017 19:25:48.323207     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-254035_kube-system(9046e63156250f7e5e453bf172e4f118)\"" pod="kube-system/kube-controller-manager-ha-254035" podUID="9046e63156250f7e5e453bf172e4f118"
	Oct 17 19:25:51 ha-254035 kubelet[795]: E1017 19:25:51.831559     795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03470d76597f9b6c687fb760070a93426d27f3c0f7970222ccd19d14d2affb5f\": container with ID starting with 03470d76597f9b6c687fb760070a93426d27f3c0f7970222ccd19d14d2affb5f not found: ID does not exist" containerID="03470d76597f9b6c687fb760070a93426d27f3c0f7970222ccd19d14d2affb5f"
	Oct 17 19:25:51 ha-254035 kubelet[795]: I1017 19:25:51.831609     795 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="03470d76597f9b6c687fb760070a93426d27f3c0f7970222ccd19d14d2affb5f" err="rpc error: code = NotFound desc = could not find container \"03470d76597f9b6c687fb760070a93426d27f3c0f7970222ccd19d14d2affb5f\": container with ID starting with 03470d76597f9b6c687fb760070a93426d27f3c0f7970222ccd19d14d2affb5f not found: ID does not exist"
	Oct 17 19:25:51 ha-254035 kubelet[795]: E1017 19:25:51.832065     795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37f378576ff44f5cd1ccff55de48495bda098525ad6fb1d91c1ef854b4fdd99f\": container with ID starting with 37f378576ff44f5cd1ccff55de48495bda098525ad6fb1d91c1ef854b4fdd99f not found: ID does not exist" containerID="37f378576ff44f5cd1ccff55de48495bda098525ad6fb1d91c1ef854b4fdd99f"
	Oct 17 19:25:51 ha-254035 kubelet[795]: I1017 19:25:51.832099     795 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="37f378576ff44f5cd1ccff55de48495bda098525ad6fb1d91c1ef854b4fdd99f" err="rpc error: code = NotFound desc = could not find container \"37f378576ff44f5cd1ccff55de48495bda098525ad6fb1d91c1ef854b4fdd99f\": container with ID starting with 37f378576ff44f5cd1ccff55de48495bda098525ad6fb1d91c1ef854b4fdd99f not found: ID does not exist"
	Oct 17 19:25:51 ha-254035 kubelet[795]: E1017 19:25:51.918773     795 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a4e6e217ea695149c5a154bbecbc7798ca28f6ae40caa311c266f47def107466/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a4e6e217ea695149c5a154bbecbc7798ca28f6ae40caa311c266f47def107466/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-254035_9046e63156250f7e5e453bf172e4f118/kube-controller-manager/3.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-254035_9046e63156250f7e5e453bf172e4f118/kube-controller-manager/3.log: no such file or directory
	Oct 17 19:25:51 ha-254035 kubelet[795]: E1017 19:25:51.921773     795 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/880b7d2432f854b1d2e4221c38cbcfa637187b519d26b99deb22f9bb126c2b9f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/880b7d2432f854b1d2e4221c38cbcfa637187b519d26b99deb22f9bb126c2b9f/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-254035_9046e63156250f7e5e453bf172e4f118/kube-controller-manager/2.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-254035_9046e63156250f7e5e453bf172e4f118/kube-controller-manager/2.log: no such file or directory
	Oct 17 19:25:59 ha-254035 kubelet[795]: I1017 19:25:59.951449     795 scope.go:117] "RemoveContainer" containerID="8f2e18695e457839c6b48b8cf9525b8e3133c1a6d2c7b0e484fc6130ec820a7a"
	Oct 17 19:25:59 ha-254035 kubelet[795]: E1017 19:25:59.951658     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-254035_kube-system(9046e63156250f7e5e453bf172e4f118)\"" pod="kube-system/kube-controller-manager-ha-254035" podUID="9046e63156250f7e5e453bf172e4f118"
	Oct 17 19:26:14 ha-254035 kubelet[795]: I1017 19:26:14.950613     795 scope.go:117] "RemoveContainer" containerID="8f2e18695e457839c6b48b8cf9525b8e3133c1a6d2c7b0e484fc6130ec820a7a"
	Oct 17 19:26:14 ha-254035 kubelet[795]: E1017 19:26:14.950806     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-254035_kube-system(9046e63156250f7e5e453bf172e4f118)\"" pod="kube-system/kube-controller-manager-ha-254035" podUID="9046e63156250f7e5e453bf172e4f118"
	Oct 17 19:26:27 ha-254035 kubelet[795]: I1017 19:26:27.952669     795 scope.go:117] "RemoveContainer" containerID="8f2e18695e457839c6b48b8cf9525b8e3133c1a6d2c7b0e484fc6130ec820a7a"
	Oct 17 19:26:29 ha-254035 kubelet[795]: I1017 19:26:29.433310     795 scope.go:117] "RemoveContainer" containerID="f662d4e90719bc39bd008b62c1cbb5dd8676a08edeef61897f3e68749b418ff7"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-254035 -n ha-254035
helpers_test.go:269: (dbg) Run:  kubectl --context ha-254035 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (5.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (4.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-254035" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-254035\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-254035\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-254035\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvid
ia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizat
ions\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-254035
helpers_test.go:243: (dbg) docker inspect ha-254035:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8",
	        "Created": "2025-10-17T19:17:36.603472481Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 306876,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:23:44.340324163Z",
	            "FinishedAt": "2025-10-17T19:23:43.760876929Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/hosts",
	        "LogPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8-json.log",
	        "Name": "/ha-254035",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-254035:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-254035",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8",
	                "LowerDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-254035",
	                "Source": "/var/lib/docker/volumes/ha-254035/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-254035",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-254035",
	                "name.minikube.sigs.k8s.io": "ha-254035",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d0adb3a8a6f2813284c8f1a167175cc89dcd4664a3ffc878d2459fa2b4bea6d1",
	            "SandboxKey": "/var/run/docker/netns/d0adb3a8a6f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-254035": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:f1:6c:59:90:54",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9f667d9c3ea201faa6573d33bffc4907012785051d424eb86a31b1e09eb8b135",
	                    "EndpointID": "daecfb65c2dbfda1e321a7412bf642ac1f3e72c152f9f670fa4c977e6a8f5b74",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-254035",
	                        "7f770318d5dc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
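Note: the published port mappings in the inspect output above (for example 22/tcp -> 127.0.0.1:33174) are the same values the provisioner queries later in this log via a Docker Go template. A minimal sketch of that lookup, not taken from the minikube source, assuming the docker CLI is on PATH and using the exact template string that appears in the cli_runner lines below:

// ssh_port_sketch.go - hypothetical helper that reads back the host port
// Docker published for the container's 22/tcp, matching the value shown in
// the NetworkSettings.Ports section above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"ha-254035").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 33174 in this run
}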
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-254035 -n ha-254035
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 logs -n 25
E1017 19:31:48.231419  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 logs -n 25: (2.191791282s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-254035 ssh -n ha-254035-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m02 sudo cat /home/docker/cp-test_ha-254035-m03_ha-254035-m02.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m03:/home/docker/cp-test.txt ha-254035-m04:/home/docker/cp-test_ha-254035-m03_ha-254035-m04.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test_ha-254035-m03_ha-254035-m04.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp testdata/cp-test.txt ha-254035-m04:/home/docker/cp-test.txt                                                             │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1188979754/001/cp-test_ha-254035-m04.txt │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035:/home/docker/cp-test_ha-254035-m04_ha-254035.txt                       │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035.txt                                                 │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035-m02:/home/docker/cp-test_ha-254035-m04_ha-254035-m02.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m02 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035-m02.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035-m03:/home/docker/cp-test_ha-254035-m04_ha-254035-m03.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m03 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035-m03.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ node    │ ha-254035 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ node    │ ha-254035 node start m02 --alsologtostderr -v 5                                                                                      │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:23 UTC │
	│ node    │ ha-254035 node list --alsologtostderr -v 5                                                                                           │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │                     │
	│ stop    │ ha-254035 stop --alsologtostderr -v 5                                                                                                │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │ 17 Oct 25 19:23 UTC │
	│ start   │ ha-254035 start --wait true --alsologtostderr -v 5                                                                                   │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │                     │
	│ node    │ ha-254035 node list --alsologtostderr -v 5                                                                                           │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:31 UTC │                     │
	│ node    │ ha-254035 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:31 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:23:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:23:44.078300  306747 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:23:44.078421  306747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:23:44.078432  306747 out.go:374] Setting ErrFile to fd 2...
	I1017 19:23:44.078438  306747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:23:44.078707  306747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:23:44.079081  306747 out.go:368] Setting JSON to false
	I1017 19:23:44.079937  306747 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7575,"bootTime":1760721449,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 19:23:44.080008  306747 start.go:141] virtualization:  
	I1017 19:23:44.083220  306747 out.go:179] * [ha-254035] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 19:23:44.087049  306747 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:23:44.087156  306747 notify.go:220] Checking for updates...
	I1017 19:23:44.093223  306747 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:23:44.096040  306747 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:23:44.098900  306747 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 19:23:44.101720  306747 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 19:23:44.104684  306747 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:23:44.108337  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:44.108506  306747 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:23:44.135326  306747 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 19:23:44.135444  306747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:23:44.192131  306747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 19:23:44.183230595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:23:44.192236  306747 docker.go:318] overlay module found
	I1017 19:23:44.195310  306747 out.go:179] * Using the docker driver based on existing profile
	I1017 19:23:44.198085  306747 start.go:305] selected driver: docker
	I1017 19:23:44.198103  306747 start.go:925] validating driver "docker" against &{Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:23:44.198244  306747 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:23:44.198355  306747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:23:44.253333  306747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 19:23:44.243935529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:23:44.253792  306747 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:23:44.253819  306747 cni.go:84] Creating CNI manager for ""
	I1017 19:23:44.253877  306747 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 19:23:44.253928  306747 start.go:349] cluster config:
	{Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:23:44.258934  306747 out.go:179] * Starting "ha-254035" primary control-plane node in "ha-254035" cluster
	I1017 19:23:44.261731  306747 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:23:44.264643  306747 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:23:44.267316  306747 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:23:44.267375  306747 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 19:23:44.267392  306747 cache.go:58] Caching tarball of preloaded images
	I1017 19:23:44.267402  306747 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:23:44.267494  306747 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:23:44.267505  306747 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:23:44.267648  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:44.287307  306747 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:23:44.287328  306747 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:23:44.287345  306747 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:23:44.287367  306747 start.go:360] acquireMachinesLock for ha-254035: {Name:mka2e39989b9cf6078778e7f6519885462ea711f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:23:44.287430  306747 start.go:364] duration metric: took 44.061µs to acquireMachinesLock for "ha-254035"
	I1017 19:23:44.287455  306747 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:23:44.287461  306747 fix.go:54] fixHost starting: 
	I1017 19:23:44.287734  306747 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:23:44.304208  306747 fix.go:112] recreateIfNeeded on ha-254035: state=Stopped err=<nil>
	W1017 19:23:44.304236  306747 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:23:44.307544  306747 out.go:252] * Restarting existing docker container for "ha-254035" ...
	I1017 19:23:44.307642  306747 cli_runner.go:164] Run: docker start ha-254035
	I1017 19:23:44.557261  306747 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:23:44.582382  306747 kic.go:430] container "ha-254035" state is running.
	I1017 19:23:44.582813  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:23:44.609625  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:44.609882  306747 machine.go:93] provisionDockerMachine start ...
	I1017 19:23:44.609944  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:44.630467  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:44.634045  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 19:23:44.634070  306747 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:23:44.634815  306747 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:23:47.792030  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035
	
	I1017 19:23:47.792065  306747 ubuntu.go:182] provisioning hostname "ha-254035"
	I1017 19:23:47.792127  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:47.809622  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:47.809936  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 19:23:47.809952  306747 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035 && echo "ha-254035" | sudo tee /etc/hostname
	I1017 19:23:47.965159  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035
	
	I1017 19:23:47.965243  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:47.983936  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:47.984247  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 19:23:47.984262  306747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:23:48.140890  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:23:48.140965  306747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:23:48.140998  306747 ubuntu.go:190] setting up certificates
	I1017 19:23:48.141008  306747 provision.go:84] configureAuth start
	I1017 19:23:48.141069  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:23:48.158600  306747 provision.go:143] copyHostCerts
	I1017 19:23:48.158645  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:23:48.158680  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:23:48.158692  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:23:48.158773  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:23:48.158860  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:23:48.158883  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:23:48.158892  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:23:48.158921  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:23:48.158969  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:23:48.158990  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:23:48.158998  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:23:48.159024  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:23:48.159076  306747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035 san=[127.0.0.1 192.168.49.2 ha-254035 localhost minikube]
	I1017 19:23:49.196726  306747 provision.go:177] copyRemoteCerts
	I1017 19:23:49.196790  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:23:49.196831  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:49.213909  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:49.316345  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:23:49.316405  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:23:49.333689  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:23:49.333750  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1017 19:23:49.350869  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:23:49.350938  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 19:23:49.369234  306747 provision.go:87] duration metric: took 1.228212253s to configureAuth
	I1017 19:23:49.369303  306747 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:23:49.369552  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:49.369665  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:49.386704  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:49.387020  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33174 <nil> <nil>}
	I1017 19:23:49.387042  306747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:23:49.707607  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:23:49.707692  306747 machine.go:96] duration metric: took 5.097783711s to provisionDockerMachine
	I1017 19:23:49.707720  306747 start.go:293] postStartSetup for "ha-254035" (driver="docker")
	I1017 19:23:49.707762  306747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:23:49.707871  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:23:49.707943  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:49.732798  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:49.836574  306747 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:23:49.839984  306747 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:23:49.840010  306747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:23:49.840021  306747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:23:49.840085  306747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:23:49.840181  306747 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:23:49.840196  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:23:49.840298  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:23:49.847846  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:23:49.865445  306747 start.go:296] duration metric: took 157.679358ms for postStartSetup
	I1017 19:23:49.865569  306747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:23:49.865624  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:49.889188  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:49.989662  306747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:23:49.994825  306747 fix.go:56] duration metric: took 5.707355296s for fixHost
	I1017 19:23:49.994852  306747 start.go:83] releasing machines lock for "ha-254035", held for 5.707408965s
	I1017 19:23:49.994927  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:23:50.015297  306747 ssh_runner.go:195] Run: cat /version.json
	I1017 19:23:50.015360  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:50.015301  306747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:23:50.015521  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:23:50.036378  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:50.050179  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:23:50.238257  306747 ssh_runner.go:195] Run: systemctl --version
	I1017 19:23:50.244735  306747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:23:50.281650  306747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:23:50.286151  306747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:23:50.286279  306747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:23:50.294085  306747 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:23:50.294116  306747 start.go:495] detecting cgroup driver to use...
	I1017 19:23:50.294156  306747 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:23:50.294238  306747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:23:50.309600  306747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:23:50.322860  306747 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:23:50.322932  306747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:23:50.338234  306747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:23:50.351355  306747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:23:50.467572  306747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:23:50.583217  306747 docker.go:234] disabling docker service ...
	I1017 19:23:50.583338  306747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:23:50.598924  306747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:23:50.611975  306747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:23:50.724286  306747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:23:50.847044  306747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:23:50.859364  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:23:50.873503  306747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:23:50.873573  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.882985  306747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:23:50.883056  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.892747  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.902591  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.911060  306747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:23:50.919007  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.928031  306747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.936934  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:23:50.945620  306747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:23:50.953208  306747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:23:50.960459  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:23:51.085184  306747 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:23:51.215570  306747 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:23:51.215643  306747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:23:51.219416  306747 start.go:563] Will wait 60s for crictl version
	I1017 19:23:51.219481  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:23:51.222932  306747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:23:51.247803  306747 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:23:51.247951  306747 ssh_runner.go:195] Run: crio --version
	I1017 19:23:51.276815  306747 ssh_runner.go:195] Run: crio --version
	I1017 19:23:51.309138  306747 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:23:51.311805  306747 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:23:51.327519  306747 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:23:51.331666  306747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:23:51.341689  306747 kubeadm.go:883] updating cluster {Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:23:51.341851  306747 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:23:51.341916  306747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:23:51.379317  306747 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:23:51.379341  306747 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:23:51.379396  306747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:23:51.405884  306747 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:23:51.405906  306747 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:23:51.405918  306747 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 19:23:51.406057  306747 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:23:51.406155  306747 ssh_runner.go:195] Run: crio config
	I1017 19:23:51.475467  306747 cni.go:84] Creating CNI manager for ""
	I1017 19:23:51.475497  306747 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 19:23:51.475520  306747 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:23:51.475544  306747 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-254035 NodeName:ha-254035 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:23:51.475670  306747 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-254035"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 19:23:51.475693  306747 kube-vip.go:115] generating kube-vip config ...
	I1017 19:23:51.475756  306747 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:23:51.487989  306747 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:23:51.488119  306747 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1017 19:23:51.488198  306747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:23:51.496044  306747 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:23:51.496117  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1017 19:23:51.503891  306747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1017 19:23:51.517028  306747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:23:51.530699  306747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1017 19:23:51.544563  306747 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 19:23:51.557994  306747 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:23:51.561600  306747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:23:51.571313  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:23:51.690597  306747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:23:51.707379  306747 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.2
	I1017 19:23:51.707451  306747 certs.go:195] generating shared ca certs ...
	I1017 19:23:51.707483  306747 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:51.707678  306747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:23:51.707765  306747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:23:51.707807  306747 certs.go:257] generating profile certs ...
	I1017 19:23:51.707925  306747 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:23:51.707978  306747 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea
	I1017 19:23:51.708011  306747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt.96820cea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1017 19:23:52.143690  306747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt.96820cea ...
	I1017 19:23:52.143724  306747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt.96820cea: {Name:mk84072e95c642d9de97a7b2d7684c1b2411f2c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:52.143929  306747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea ...
	I1017 19:23:52.143944  306747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea: {Name:mk1e13a21ca5f9f77c2e8e2d4f37d2c902696b37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:52.144031  306747 certs.go:382] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt.96820cea -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt
	I1017 19:23:52.144173  306747 certs.go:386] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key
	I1017 19:23:52.144307  306747 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:23:52.144326  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:23:52.144342  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:23:52.144362  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:23:52.144377  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:23:52.144396  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:23:52.144419  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:23:52.144435  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:23:52.144450  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:23:52.144501  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:23:52.144555  306747 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:23:52.144570  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:23:52.144594  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:23:52.144621  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:23:52.144646  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:23:52.144696  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:23:52.144726  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:23:52.144744  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:23:52.144760  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:23:52.145349  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:23:52.164836  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:23:52.182173  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:23:52.200320  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:23:52.220031  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:23:52.239993  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:23:52.259787  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:23:52.278396  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:23:52.296286  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:23:52.313979  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:23:52.331810  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:23:52.349798  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:23:52.364237  306747 ssh_runner.go:195] Run: openssl version
	I1017 19:23:52.376391  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:23:52.385410  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:23:52.389746  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:23:52.389837  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:23:52.434948  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:23:52.443397  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:23:52.452268  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:23:52.460529  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:23:52.460626  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:23:52.518909  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:23:52.528730  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:23:52.541129  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:23:52.545573  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:23:52.545658  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:23:52.629233  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:23:52.650967  306747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:23:52.657469  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:23:52.741430  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:23:52.801484  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:23:52.855613  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:23:52.911294  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:23:52.960715  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 19:23:53.023389  306747 kubeadm.go:400] StartCluster: {Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:23:53.023526  306747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:23:53.023593  306747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:23:53.070982  306747 cri.go:89] found id: "a9f69dd8228df806b3caf0a6a77814b3035f6624474afd789ff17d36b93becbb"
	I1017 19:23:53.071006  306747 cri.go:89] found id: "2dc181e1d75c199e1d878c25f6b4eb381f5134e5e8ff6ed9deea02322d7cdf4c"
	I1017 19:23:53.071011  306747 cri.go:89] found id: "6fb4bcbcf5815899f9ed7e0ee3f40ae912c24131eda2482a13e66f3bf9211953"
	I1017 19:23:53.071015  306747 cri.go:89] found id: "99ffff8c4838d302fd86aa2def104fc0bc5a061a4b4b00a66b6659be26e84f94"
	I1017 19:23:53.071018  306747 cri.go:89] found id: "b745cb636fe8e12797dbad3808d1af04aa579d4fbd2ba8ac91052e88e1d9594d"
	I1017 19:23:53.071022  306747 cri.go:89] found id: ""
	I1017 19:23:53.071070  306747 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 19:23:53.085921  306747 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:23:53Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:23:53.085995  306747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:23:53.099392  306747 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 19:23:53.099418  306747 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 19:23:53.099471  306747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 19:23:53.118282  306747 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:23:53.118709  306747 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-254035" does not appear in /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:23:53.118820  306747 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-257739/kubeconfig needs updating (will repair): [kubeconfig missing "ha-254035" cluster setting kubeconfig missing "ha-254035" context setting]
	I1017 19:23:53.119084  306747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:53.119598  306747 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 19:23:53.120104  306747 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1017 19:23:53.120124  306747 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1017 19:23:53.120130  306747 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1017 19:23:53.120135  306747 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1017 19:23:53.120142  306747 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1017 19:23:53.120434  306747 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1017 19:23:53.120753  306747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 19:23:53.137306  306747 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1017 19:23:53.137333  306747 kubeadm.go:601] duration metric: took 37.90723ms to restartPrimaryControlPlane
	I1017 19:23:53.137344  306747 kubeadm.go:402] duration metric: took 113.964982ms to StartCluster
	I1017 19:23:53.137360  306747 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:53.137421  306747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:23:53.137983  306747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:23:53.138193  306747 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:23:53.138219  306747 start.go:241] waiting for startup goroutines ...
	I1017 19:23:53.138228  306747 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:23:53.138643  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:53.142436  306747 out.go:179] * Enabled addons: 
	I1017 19:23:53.145409  306747 addons.go:514] duration metric: took 7.175068ms for enable addons: enabled=[]
	I1017 19:23:53.145452  306747 start.go:246] waiting for cluster config update ...
	I1017 19:23:53.145461  306747 start.go:255] writing updated cluster config ...
	I1017 19:23:53.148803  306747 out.go:203] 
	I1017 19:23:53.151893  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:53.152042  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:53.155214  306747 out.go:179] * Starting "ha-254035-m02" control-plane node in "ha-254035" cluster
	I1017 19:23:53.158764  306747 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:23:53.161709  306747 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:23:53.164610  306747 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:23:53.164638  306747 cache.go:58] Caching tarball of preloaded images
	I1017 19:23:53.164743  306747 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:23:53.164758  306747 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:23:53.164894  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:53.165099  306747 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:23:53.194887  306747 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:23:53.194907  306747 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:23:53.194919  306747 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:23:53.194954  306747 start.go:360] acquireMachinesLock for ha-254035-m02: {Name:mkcf59557cfb2c18712510006a9b88f53e9d8916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:23:53.195003  306747 start.go:364] duration metric: took 34.034µs to acquireMachinesLock for "ha-254035-m02"
	I1017 19:23:53.195021  306747 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:23:53.195027  306747 fix.go:54] fixHost starting: m02
	I1017 19:23:53.195286  306747 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:23:53.230172  306747 fix.go:112] recreateIfNeeded on ha-254035-m02: state=Stopped err=<nil>
	W1017 19:23:53.230198  306747 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:23:53.233425  306747 out.go:252] * Restarting existing docker container for "ha-254035-m02" ...
	I1017 19:23:53.233506  306747 cli_runner.go:164] Run: docker start ha-254035-m02
	I1017 19:23:53.677194  306747 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:23:53.705353  306747 kic.go:430] container "ha-254035-m02" state is running.
	I1017 19:23:53.705741  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:23:53.741365  306747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:23:53.741612  306747 machine.go:93] provisionDockerMachine start ...
	I1017 19:23:53.741677  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:53.774362  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:53.774683  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1017 19:23:53.774700  306747 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:23:53.776617  306747 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32782->127.0.0.1:33179: read: connection reset by peer
	I1017 19:23:57.101345  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m02
	
	I1017 19:23:57.101367  306747 ubuntu.go:182] provisioning hostname "ha-254035-m02"
	I1017 19:23:57.101452  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:57.129925  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:57.130248  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1017 19:23:57.130260  306747 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035-m02 && echo "ha-254035-m02" | sudo tee /etc/hostname
	I1017 19:23:57.485252  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m02
	
	I1017 19:23:57.485332  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:57.518218  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:57.518523  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1017 19:23:57.518547  306747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:23:57.769807  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:23:57.769837  306747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:23:57.769852  306747 ubuntu.go:190] setting up certificates
	I1017 19:23:57.769861  306747 provision.go:84] configureAuth start
	I1017 19:23:57.769925  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:23:57.808507  306747 provision.go:143] copyHostCerts
	I1017 19:23:57.808576  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:23:57.808611  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:23:57.808621  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:23:57.808702  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:23:57.808777  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:23:57.808795  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:23:57.808799  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:23:57.808824  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:23:57.808885  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:23:57.808900  306747 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:23:57.808904  306747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:23:57.808927  306747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:23:57.808973  306747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035-m02 san=[127.0.0.1 192.168.49.3 ha-254035-m02 localhost minikube]
	I1017 19:23:58.970392  306747 provision.go:177] copyRemoteCerts
	I1017 19:23:58.970466  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:23:58.970517  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:58.988411  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:23:59.109264  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:23:59.109327  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:23:59.143927  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:23:59.144007  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:23:59.175735  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:23:59.175798  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 19:23:59.207513  306747 provision.go:87] duration metric: took 1.437637997s to configureAuth
	I1017 19:23:59.207541  306747 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:23:59.207787  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:23:59.207891  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:59.254211  306747 main.go:141] libmachine: Using SSH client type: native
	I1017 19:23:59.254534  306747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33179 <nil> <nil>}
	I1017 19:23:59.254554  306747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:23:59.802396  306747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:23:59.802506  306747 machine.go:96] duration metric: took 6.06086173s to provisionDockerMachine
	I1017 19:23:59.802537  306747 start.go:293] postStartSetup for "ha-254035-m02" (driver="docker")
	I1017 19:23:59.802584  306747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:23:59.802692  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:23:59.802768  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:59.826274  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:23:59.933472  306747 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:23:59.937860  306747 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:23:59.937890  306747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:23:59.937902  306747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:23:59.937957  306747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:23:59.938045  306747 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:23:59.938058  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:23:59.938173  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:23:59.946632  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:23:59.974586  306747 start.go:296] duration metric: took 172.005858ms for postStartSetup
	I1017 19:23:59.974693  306747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:23:59.974736  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:23:59.998482  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:24:00.178671  306747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:24:00.215855  306747 fix.go:56] duration metric: took 7.020817171s for fixHost
	I1017 19:24:00.215889  306747 start.go:83] releasing machines lock for "ha-254035-m02", held for 7.020877911s
	I1017 19:24:00.215976  306747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:24:00.366887  306747 out.go:179] * Found network options:
	I1017 19:24:00.370345  306747 out.go:179]   - NO_PROXY=192.168.49.2
	W1017 19:24:00.373400  306747 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:24:00.373520  306747 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 19:24:00.373638  306747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:24:00.373712  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:24:00.373921  306747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:24:00.373955  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:24:00.473797  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:24:00.502501  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33179 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:24:01.163570  306747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:24:01.201188  306747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:24:01.201285  306747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:24:01.221545  306747 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:24:01.221578  306747 start.go:495] detecting cgroup driver to use...
	I1017 19:24:01.221624  306747 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:24:01.221679  306747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:24:01.249432  306747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:24:01.274115  306747 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:24:01.274197  306747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:24:01.300156  306747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:24:01.327634  306747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:24:01.676293  306747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:24:01.963473  306747 docker.go:234] disabling docker service ...
	I1017 19:24:01.963548  306747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:24:01.985469  306747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:24:02.006761  306747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:24:02.326335  306747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:24:02.689696  306747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:24:02.707153  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:24:02.733380  306747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:24:02.733503  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.745270  306747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:24:02.745354  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.761212  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.777017  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.786654  306747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:24:02.797775  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.809053  306747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.819042  306747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:24:02.830450  306747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:24:02.839137  306747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:24:02.853061  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:24:03.081615  306747 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:25:33.444575  306747 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.36287356s)
	I1017 19:25:33.444601  306747 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:25:33.444663  306747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:25:33.448790  306747 start.go:563] Will wait 60s for crictl version
	I1017 19:25:33.448855  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:25:33.452484  306747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:25:33.483181  306747 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:25:33.483261  306747 ssh_runner.go:195] Run: crio --version
	I1017 19:25:33.520275  306747 ssh_runner.go:195] Run: crio --version
	I1017 19:25:33.555708  306747 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:25:33.558710  306747 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 19:25:33.561569  306747 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:25:33.577269  306747 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:25:33.581166  306747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:25:33.590512  306747 mustload.go:65] Loading cluster: ha-254035
	I1017 19:25:33.590749  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:25:33.591003  306747 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:25:33.607631  306747 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:25:33.607910  306747 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.3
	I1017 19:25:33.607918  306747 certs.go:195] generating shared ca certs ...
	I1017 19:25:33.607932  306747 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:25:33.608031  306747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:25:33.608069  306747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:25:33.608076  306747 certs.go:257] generating profile certs ...
	I1017 19:25:33.608151  306747 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:25:33.608210  306747 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.5a836dc6
	I1017 19:25:33.608248  306747 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:25:33.608256  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:25:33.608268  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:25:33.608278  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:25:33.608288  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:25:33.608298  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:25:33.608314  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:25:33.608325  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:25:33.608334  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:25:33.608382  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:25:33.608409  306747 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:25:33.608418  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:25:33.608439  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:25:33.608460  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:25:33.608482  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:25:33.608557  306747 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:25:33.608586  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:25:33.608606  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:25:33.608635  306747 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:25:33.608691  306747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:25:33.626221  306747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33174 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:25:33.720799  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 19:25:33.724641  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 19:25:33.732808  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 19:25:33.736200  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 19:25:33.744126  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 19:25:33.747465  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 19:25:33.755494  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 19:25:33.759075  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1017 19:25:33.767011  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 19:25:33.770516  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 19:25:33.778582  306747 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 19:25:33.781925  306747 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 19:25:33.789662  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:25:33.814144  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:25:33.834289  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:25:33.855264  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:25:33.875243  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:25:33.892238  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:25:33.909902  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:25:33.927819  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:25:33.945089  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:25:33.970864  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:25:33.990984  306747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:25:34.011449  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 19:25:34.027436  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 19:25:34.042890  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 19:25:34.058368  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1017 19:25:34.072057  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 19:25:34.088147  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 19:25:34.104554  306747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1017 19:25:34.119006  306747 ssh_runner.go:195] Run: openssl version
	I1017 19:25:34.125500  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:25:34.134066  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:25:34.138184  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:25:34.138272  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:25:34.179366  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:25:34.187225  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:25:34.195194  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:25:34.198812  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:25:34.198884  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:25:34.240748  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:25:34.248576  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:25:34.256442  306747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:25:34.260252  306747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:25:34.260343  306747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:25:34.301741  306747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:25:34.309494  306747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:25:34.313266  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:25:34.354021  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:25:34.403496  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:25:34.452995  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:25:34.501920  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:25:34.553096  306747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 19:25:34.605637  306747 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1017 19:25:34.605735  306747 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:25:34.605768  306747 kube-vip.go:115] generating kube-vip config ...
	I1017 19:25:34.605818  306747 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:25:34.618260  306747 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:25:34.618384  306747 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1017 19:25:34.618473  306747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:25:34.626096  306747 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:25:34.626222  306747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 19:25:34.634241  306747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:25:34.648042  306747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:25:34.661462  306747 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 19:25:34.676617  306747 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:25:34.680227  306747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:25:34.690889  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:25:34.816737  306747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:25:34.831088  306747 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:25:34.831560  306747 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:25:34.834934  306747 out.go:179] * Verifying Kubernetes components...
	I1017 19:25:34.837819  306747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:25:34.968993  306747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:25:34.983274  306747 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 19:25:34.983348  306747 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 19:25:34.983632  306747 node_ready.go:35] waiting up to 6m0s for node "ha-254035-m02" to be "Ready" ...
	I1017 19:25:40.996755  306747 node_ready.go:49] node "ha-254035-m02" is "Ready"
	I1017 19:25:40.996789  306747 node_ready.go:38] duration metric: took 6.013138239s for node "ha-254035-m02" to be "Ready" ...
	I1017 19:25:40.996811  306747 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:25:40.996889  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:41.497684  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:41.997836  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:42.497138  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:42.997736  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:43.497602  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:43.997356  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:44.497754  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:44.997290  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:45.497281  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:45.997333  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:46.497704  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:46.997128  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:47.497723  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:47.997671  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:48.497561  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:48.997733  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:49.497782  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:49.997750  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:50.497774  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:50.997177  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:51.497562  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:51.997821  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:52.497764  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:52.997863  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:53.497099  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:53.997052  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:54.497663  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:54.997664  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:55.497701  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:55.997019  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:56.497726  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:56.997168  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:57.497752  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:57.997835  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:58.497010  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:58.997743  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:59.497316  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:25:59.997012  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:00.497061  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:00.997884  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:01.497722  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:01.997039  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:02.497739  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:02.997315  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:03.497590  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:03.997754  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:04.497035  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:04.997744  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:05.497624  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:05.997419  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:06.497061  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:06.997596  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:07.497373  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:07.997733  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:08.497364  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:08.997732  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:09.497421  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:09.997728  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:10.497717  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:10.996987  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:11.497090  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:11.996943  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:12.497429  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:12.997010  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:13.496953  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:13.997093  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:14.497074  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:14.997281  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:15.497737  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:15.997688  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:16.497625  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:16.997704  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:17.497320  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:17.996949  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:18.497953  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:18.997042  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:19.497090  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:19.997041  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:20.497518  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:20.997019  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:21.497012  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:21.996982  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:22.497045  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:22.997657  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:23.497467  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:23.997803  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:24.497044  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:24.997325  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:25.497747  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:25.997044  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:26.497026  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:26.997552  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:27.497036  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:27.997604  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:28.497701  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:28.997373  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:29.497563  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:29.997697  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:30.497017  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:30.997407  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:31.497716  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:31.997874  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:32.497096  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:32.997561  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:33.497057  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:33.997665  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:34.497043  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:34.997691  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:34.997800  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:35.032363  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:35.032386  306747 cri.go:89] found id: ""
	I1017 19:26:35.032399  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:35.032460  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.036381  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:35.036459  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:35.065338  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:35.065359  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:35.065364  306747 cri.go:89] found id: ""
	I1017 19:26:35.065371  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:35.065425  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.069065  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.072703  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:35.072774  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:35.103898  306747 cri.go:89] found id: ""
	I1017 19:26:35.103925  306747 logs.go:282] 0 containers: []
	W1017 19:26:35.103934  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:35.103941  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:35.104009  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:35.133147  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:35.133171  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:35.133176  306747 cri.go:89] found id: ""
	I1017 19:26:35.133189  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:35.133243  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.137074  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.140598  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:35.140672  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:35.172805  306747 cri.go:89] found id: ""
	I1017 19:26:35.172831  306747 logs.go:282] 0 containers: []
	W1017 19:26:35.172840  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:35.172847  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:35.172921  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:35.200314  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:35.200339  306747 cri.go:89] found id: ""
	I1017 19:26:35.200347  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:35.200399  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:35.204068  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:35.204142  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:35.229333  306747 cri.go:89] found id: ""
	I1017 19:26:35.229355  306747 logs.go:282] 0 containers: []
	W1017 19:26:35.229364  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:35.229373  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:35.229386  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:35.270788  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:35.270824  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:35.327408  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:35.327441  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:35.407924  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:35.407963  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:35.511553  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:35.511590  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:35.532712  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:35.532742  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:35.560601  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:35.560631  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:35.605951  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:35.605984  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:35.637220  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:35.637251  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:35.667818  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:35.667848  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:35.697952  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:35.697980  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:36.107033  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:36.098521    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.099526    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.100351    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.101907    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.102306    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:36.098521    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.099526    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.100351    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.101907    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:36.102306    1541 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:38.608691  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:38.620441  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:38.620597  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:38.653949  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:38.653982  306747 cri.go:89] found id: ""
	I1017 19:26:38.653991  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:38.654045  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.657661  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:38.657779  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:38.682961  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:38.682992  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:38.682998  306747 cri.go:89] found id: ""
	I1017 19:26:38.683005  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:38.683057  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.686897  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.690246  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:38.690316  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:38.727058  306747 cri.go:89] found id: ""
	I1017 19:26:38.727088  306747 logs.go:282] 0 containers: []
	W1017 19:26:38.727096  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:38.727102  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:38.727159  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:38.751866  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:38.751891  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:38.751895  306747 cri.go:89] found id: ""
	I1017 19:26:38.751902  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:38.751960  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.755561  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.758764  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:38.758835  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:38.791573  306747 cri.go:89] found id: ""
	I1017 19:26:38.791597  306747 logs.go:282] 0 containers: []
	W1017 19:26:38.791607  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:38.791613  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:38.791672  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:38.818970  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:38.818993  306747 cri.go:89] found id: ""
	I1017 19:26:38.819002  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:38.819054  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:38.822644  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:38.822766  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:38.849350  306747 cri.go:89] found id: ""
	I1017 19:26:38.849373  306747 logs.go:282] 0 containers: []
	W1017 19:26:38.849381  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:38.849390  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:38.849436  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:38.883482  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:38.883512  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:38.978629  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:38.978664  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:39.055121  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:39.045881    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.046283    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.047962    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.048507    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.050096    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:39.045881    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.046283    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.047962    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.048507    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:39.050096    1624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:39.055145  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:39.055158  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:39.081488  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:39.081516  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:39.123529  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:39.123560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:39.152993  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:39.153024  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:39.181581  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:39.181608  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:39.199086  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:39.199116  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:39.231605  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:39.231638  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:39.287509  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:39.287544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:41.868969  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:41.879522  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:41.879591  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:41.906366  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:41.906388  306747 cri.go:89] found id: ""
	I1017 19:26:41.906397  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:41.906450  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:41.909979  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:41.910090  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:41.940072  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:41.940101  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:41.940105  306747 cri.go:89] found id: ""
	I1017 19:26:41.940113  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:41.940173  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:41.945194  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:41.948667  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:41.948784  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:41.979374  306747 cri.go:89] found id: ""
	I1017 19:26:41.979410  306747 logs.go:282] 0 containers: []
	W1017 19:26:41.979419  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:41.979425  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:41.979492  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:42.008367  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:42.008445  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:42.008465  306747 cri.go:89] found id: ""
	I1017 19:26:42.008493  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:42.008628  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:42.016467  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:42.031735  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:42.031876  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:42.079629  306747 cri.go:89] found id: ""
	I1017 19:26:42.079665  306747 logs.go:282] 0 containers: []
	W1017 19:26:42.079676  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:42.079684  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:42.079750  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:42.122316  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:42.122342  306747 cri.go:89] found id: ""
	I1017 19:26:42.122351  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:42.122423  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:42.131137  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:42.131241  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:42.200222  306747 cri.go:89] found id: ""
	I1017 19:26:42.200249  306747 logs.go:282] 0 containers: []
	W1017 19:26:42.200259  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:42.200270  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:42.200283  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:42.314817  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:42.314908  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:42.375712  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:42.375762  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:42.431602  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:42.431639  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:42.465004  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:42.465097  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:42.491256  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:42.491284  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:42.567094  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:42.558455    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.559104    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.560757    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.561472    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.563142    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:42.558455    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.559104    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.560757    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.561472    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:42.563142    1782 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:42.567120  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:42.567134  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:42.597513  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:42.597543  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:42.632231  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:42.632268  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:42.659445  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:42.659478  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:42.686189  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:42.686217  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:45.285116  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:45.308457  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:45.308578  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:45.374050  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:45.374075  306747 cri.go:89] found id: ""
	I1017 19:26:45.374083  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:45.374152  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.386847  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:45.387031  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:45.432081  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:45.432105  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:45.432111  306747 cri.go:89] found id: ""
	I1017 19:26:45.432129  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:45.432185  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.436568  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.443473  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:45.443575  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:45.473992  306747 cri.go:89] found id: ""
	I1017 19:26:45.474066  306747 logs.go:282] 0 containers: []
	W1017 19:26:45.474095  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:45.474124  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:45.474279  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:45.508735  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:45.508808  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:45.508820  306747 cri.go:89] found id: ""
	I1017 19:26:45.508829  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:45.508889  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.513024  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.517047  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:45.517124  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:45.544672  306747 cri.go:89] found id: ""
	I1017 19:26:45.544698  306747 logs.go:282] 0 containers: []
	W1017 19:26:45.544707  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:45.544714  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:45.544814  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:45.577228  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:45.577250  306747 cri.go:89] found id: ""
	I1017 19:26:45.577257  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:45.577316  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:45.581280  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:45.581379  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:45.608143  306747 cri.go:89] found id: ""
	I1017 19:26:45.608166  306747 logs.go:282] 0 containers: []
	W1017 19:26:45.608174  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:45.608183  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:45.608226  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:45.627200  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:45.627230  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:45.699692  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:45.692149    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.692814    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.694339    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.694730    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.696164    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:45.692149    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.692814    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.694339    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.694730    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:45.696164    1894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:45.699717  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:45.699732  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:45.725239  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:45.725269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:45.766316  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:45.766359  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:45.831866  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:45.831908  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:45.869708  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:45.869736  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:45.910170  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:45.910198  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:46.010455  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:46.010498  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:46.047523  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:46.047559  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:46.076222  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:46.076306  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:48.663425  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:48.673865  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:48.673931  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:48.699244  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:48.699267  306747 cri.go:89] found id: ""
	I1017 19:26:48.699275  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:48.699330  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.702918  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:48.702988  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:48.729193  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:48.729268  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:48.729288  306747 cri.go:89] found id: ""
	I1017 19:26:48.729311  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:48.729390  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.732927  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.736821  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:48.736893  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:48.763745  306747 cri.go:89] found id: ""
	I1017 19:26:48.763770  306747 logs.go:282] 0 containers: []
	W1017 19:26:48.763780  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:48.763786  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:48.763842  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:48.790384  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:48.790407  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:48.790413  306747 cri.go:89] found id: ""
	I1017 19:26:48.790420  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:48.790496  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.796703  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.800342  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:48.800409  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:48.825802  306747 cri.go:89] found id: ""
	I1017 19:26:48.825830  306747 logs.go:282] 0 containers: []
	W1017 19:26:48.825839  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:48.825846  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:48.825904  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:48.863208  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:48.863231  306747 cri.go:89] found id: ""
	I1017 19:26:48.863239  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:48.863294  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:48.866822  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:48.866902  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:48.896937  306747 cri.go:89] found id: ""
	I1017 19:26:48.897017  306747 logs.go:282] 0 containers: []
	W1017 19:26:48.897039  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:48.897080  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:48.897109  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:48.999995  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:49.000071  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:49.019541  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:49.019629  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:49.045737  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:49.045806  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:49.106443  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:49.106478  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:49.135555  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:49.135583  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:49.162643  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:49.162670  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:49.240999  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:49.241038  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:49.311820  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:49.304505    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.305101    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.306817    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.307292    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.308350    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:49.304505    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.305101    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.306817    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.307292    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:49.308350    2062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:49.311849  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:49.311861  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:49.347575  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:49.347614  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:49.399291  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:49.399328  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:51.931612  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:51.944600  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:51.944667  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:51.977717  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:51.977741  306747 cri.go:89] found id: ""
	I1017 19:26:51.977750  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:51.977808  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:51.981757  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:51.981877  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:52.013943  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:52.013965  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:52.013971  306747 cri.go:89] found id: ""
	I1017 19:26:52.013979  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:52.014034  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.017876  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.021450  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:52.021529  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:52.054762  306747 cri.go:89] found id: ""
	I1017 19:26:52.054788  306747 logs.go:282] 0 containers: []
	W1017 19:26:52.054797  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:52.054804  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:52.054873  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:52.094469  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:52.094492  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:52.094498  306747 cri.go:89] found id: ""
	I1017 19:26:52.094506  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:52.094561  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.099707  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.103487  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:52.103557  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:52.137366  306747 cri.go:89] found id: ""
	I1017 19:26:52.137393  306747 logs.go:282] 0 containers: []
	W1017 19:26:52.137403  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:52.137410  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:52.137494  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:52.164118  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:52.164142  306747 cri.go:89] found id: ""
	I1017 19:26:52.164151  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:52.164235  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:52.167871  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:52.167951  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:52.195587  306747 cri.go:89] found id: ""
	I1017 19:26:52.195667  306747 logs.go:282] 0 containers: []
	W1017 19:26:52.195691  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:52.195730  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:52.195759  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:52.214865  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:52.214895  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:52.252677  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:52.252718  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:52.306241  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:52.306281  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:52.362956  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:52.362991  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:52.391628  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:52.391659  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:52.471864  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:52.463115    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.464242    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.464958    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.465978    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.466515    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:52.463115    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.464242    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.464958    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.465978    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:52.466515    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:52.471900  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:52.471915  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:52.518448  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:52.518483  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:52.552877  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:52.552904  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:52.635208  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:52.635241  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:52.671244  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:52.671274  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:55.270940  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:55.282002  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:55.282081  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:55.307829  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:55.307853  306747 cri.go:89] found id: ""
	I1017 19:26:55.307862  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:55.307917  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.311717  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:55.311788  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:55.337747  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:55.337770  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:55.337775  306747 cri.go:89] found id: ""
	I1017 19:26:55.337783  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:55.337840  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.341583  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.345443  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:55.345519  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:55.374240  306747 cri.go:89] found id: ""
	I1017 19:26:55.374268  306747 logs.go:282] 0 containers: []
	W1017 19:26:55.374277  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:55.374283  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:55.374348  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:55.400969  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:55.400994  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:55.400999  306747 cri.go:89] found id: ""
	I1017 19:26:55.401007  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:55.401074  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.405683  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.409216  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:55.409288  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:55.436866  306747 cri.go:89] found id: ""
	I1017 19:26:55.436897  306747 logs.go:282] 0 containers: []
	W1017 19:26:55.436907  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:55.436913  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:55.436972  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:55.469071  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:55.469094  306747 cri.go:89] found id: ""
	I1017 19:26:55.469103  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:55.469160  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:55.472979  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:55.473075  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:55.504006  306747 cri.go:89] found id: ""
	I1017 19:26:55.504033  306747 logs.go:282] 0 containers: []
	W1017 19:26:55.504043  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:55.504052  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:55.504064  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:55.530026  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:55.530065  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:55.566251  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:55.566281  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:55.619544  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:55.619580  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:55.647120  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:55.647155  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:55.674483  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:55.674552  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:55.771290  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:55.771328  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:55.791108  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:55.791139  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:55.877444  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:55.868298    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.869608    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.870496    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.871568    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.873502    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:55.868298    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.869608    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.870496    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.871568    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:55.873502    2345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:55.877467  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:55.877481  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:55.942292  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:55.942327  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:26:56.029233  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:56.029279  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:58.564639  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:26:58.575251  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:26:58.575327  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:26:58.603745  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:58.603769  306747 cri.go:89] found id: ""
	I1017 19:26:58.603778  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:26:58.603841  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.607600  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:26:58.607673  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:26:58.635364  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:58.635387  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:58.635393  306747 cri.go:89] found id: ""
	I1017 19:26:58.635401  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:26:58.635459  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.639164  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.642599  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:26:58.642665  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:26:58.671065  306747 cri.go:89] found id: ""
	I1017 19:26:58.671089  306747 logs.go:282] 0 containers: []
	W1017 19:26:58.671098  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:26:58.671105  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:26:58.671161  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:26:58.697581  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:58.697606  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:58.697613  306747 cri.go:89] found id: ""
	I1017 19:26:58.697621  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:26:58.697701  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.701636  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.705721  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:26:58.705790  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:26:58.739521  306747 cri.go:89] found id: ""
	I1017 19:26:58.739548  306747 logs.go:282] 0 containers: []
	W1017 19:26:58.739557  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:26:58.739563  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:26:58.739618  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:26:58.766994  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:58.767022  306747 cri.go:89] found id: ""
	I1017 19:26:58.767030  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:26:58.767085  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:26:58.771181  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:26:58.771253  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:26:58.798835  306747 cri.go:89] found id: ""
	I1017 19:26:58.798862  306747 logs.go:282] 0 containers: []
	W1017 19:26:58.798871  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:26:58.798880  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:26:58.798891  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:26:58.841984  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:26:58.842010  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:26:58.866669  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:26:58.866697  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:26:58.916756  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:26:58.916789  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:26:58.980015  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:26:58.980050  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:26:59.009380  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:26:59.009409  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:26:59.109257  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:26:59.109295  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:26:59.177549  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:26:59.168803    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.169600    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.171537    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.172076    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.173678    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:26:59.168803    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.169600    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.171537    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.172076    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:26:59.173678    2476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:26:59.177581  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:26:59.177599  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:26:59.206699  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:26:59.206727  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:26:59.242107  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:26:59.242142  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:26:59.275450  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:26:59.275479  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:01.857354  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:01.869639  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:01.869705  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:01.902744  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:01.902764  306747 cri.go:89] found id: ""
	I1017 19:27:01.902772  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:01.902838  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:01.906810  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:01.906935  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:01.934659  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:01.934722  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:01.934742  306747 cri.go:89] found id: ""
	I1017 19:27:01.934766  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:01.934853  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:01.938762  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:01.946146  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:01.946267  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:01.980395  306747 cri.go:89] found id: ""
	I1017 19:27:01.980461  306747 logs.go:282] 0 containers: []
	W1017 19:27:01.980482  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:01.980505  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:01.980614  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:02.015273  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:02.015298  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:02.015303  306747 cri.go:89] found id: ""
	I1017 19:27:02.015320  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:02.015383  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:02.019407  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:02.023456  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:02.023534  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:02.051152  306747 cri.go:89] found id: ""
	I1017 19:27:02.051182  306747 logs.go:282] 0 containers: []
	W1017 19:27:02.051192  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:02.051198  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:02.051258  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:02.080723  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:02.080745  306747 cri.go:89] found id: ""
	I1017 19:27:02.080753  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:02.080813  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:02.084603  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:02.084678  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:02.120072  306747 cri.go:89] found id: ""
	I1017 19:27:02.120146  306747 logs.go:282] 0 containers: []
	W1017 19:27:02.120170  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:02.120195  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:02.120230  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:02.139600  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:02.139631  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:02.185131  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:02.185166  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:02.229909  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:02.229940  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:02.260111  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:02.260140  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:02.288588  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:02.288618  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:02.370459  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:02.370495  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:02.476572  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:02.476608  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:02.551905  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:02.543576    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.544579    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.546057    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.546535    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.548140    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:02.543576    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.544579    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.546057    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.546535    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:02.548140    2611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:02.551926  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:02.551940  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:02.578293  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:02.578321  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:02.633456  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:02.633493  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:05.164689  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:05.177240  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:05.177315  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:05.205506  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:05.205530  306747 cri.go:89] found id: ""
	I1017 19:27:05.205540  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:05.205597  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.209410  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:05.209492  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:05.236360  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:05.236383  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:05.236388  306747 cri.go:89] found id: ""
	I1017 19:27:05.236396  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:05.236448  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.240255  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.243840  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:05.243907  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:05.279749  306747 cri.go:89] found id: ""
	I1017 19:27:05.279788  306747 logs.go:282] 0 containers: []
	W1017 19:27:05.279798  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:05.279804  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:05.279860  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:05.307767  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:05.307790  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:05.307796  306747 cri.go:89] found id: ""
	I1017 19:27:05.307803  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:05.307857  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.311429  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.314827  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:05.314906  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:05.340148  306747 cri.go:89] found id: ""
	I1017 19:27:05.340175  306747 logs.go:282] 0 containers: []
	W1017 19:27:05.340184  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:05.340190  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:05.340246  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:05.366040  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:05.366063  306747 cri.go:89] found id: ""
	I1017 19:27:05.366071  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:05.366145  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:05.369954  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:05.370054  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:05.396415  306747 cri.go:89] found id: ""
	I1017 19:27:05.396439  306747 logs.go:282] 0 containers: []
	W1017 19:27:05.396448  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:05.396457  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:05.396468  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:05.491768  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:05.491804  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:05.510133  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:05.510179  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:05.588291  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:05.580157    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.580846    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.582570    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.583481    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.584634    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:05.580157    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.580846    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.582570    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.583481    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:05.584634    2714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:05.588313  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:05.588326  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:05.616894  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:05.616921  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:05.660215  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:05.660252  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:05.715621  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:05.715657  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:05.744211  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:05.744240  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:05.777510  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:05.777544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:05.808038  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:05.808066  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:05.885964  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:05.886000  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:08.420171  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:08.431142  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:08.431221  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:08.457528  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:08.457552  306747 cri.go:89] found id: ""
	I1017 19:27:08.457561  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:08.457616  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.461556  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:08.461665  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:08.492016  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:08.492039  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:08.492044  306747 cri.go:89] found id: ""
	I1017 19:27:08.492052  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:08.492103  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.495761  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.500185  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:08.500282  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:08.526916  306747 cri.go:89] found id: ""
	I1017 19:27:08.526941  306747 logs.go:282] 0 containers: []
	W1017 19:27:08.526950  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:08.526957  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:08.527014  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:08.556113  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:08.556134  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:08.556140  306747 cri.go:89] found id: ""
	I1017 19:27:08.556147  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:08.556214  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.560101  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.564014  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:08.564084  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:08.594033  306747 cri.go:89] found id: ""
	I1017 19:27:08.594056  306747 logs.go:282] 0 containers: []
	W1017 19:27:08.594071  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:08.594079  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:08.594135  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:08.620047  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:08.620113  306747 cri.go:89] found id: ""
	I1017 19:27:08.620142  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:08.620221  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:08.624310  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:08.624418  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:08.649502  306747 cri.go:89] found id: ""
	I1017 19:27:08.649567  306747 logs.go:282] 0 containers: []
	W1017 19:27:08.649595  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:08.649623  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:08.649648  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:08.743803  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:08.743839  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:08.769242  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:08.769268  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:08.799565  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:08.799593  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:08.828556  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:08.828635  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:08.846407  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:08.846438  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:08.930960  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:08.922375    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.923180    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.925039    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.925592    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.927335    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:08.922375    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.923180    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.925039    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.925592    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:08.927335    2876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:08.930984  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:08.930996  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:08.989884  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:08.989918  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:09.029740  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:09.029776  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:09.088750  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:09.088784  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:09.174757  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:09.174791  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:11.706527  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:11.717507  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:11.717580  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:11.742517  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:11.742540  306747 cri.go:89] found id: ""
	I1017 19:27:11.742548  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:11.742628  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.746473  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:11.746545  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:11.778260  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:11.778322  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:11.778341  306747 cri.go:89] found id: ""
	I1017 19:27:11.778364  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:11.778435  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.782026  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.785484  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:11.785543  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:11.816069  306747 cri.go:89] found id: ""
	I1017 19:27:11.816094  306747 logs.go:282] 0 containers: []
	W1017 19:27:11.816103  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:11.816109  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:11.816175  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:11.841738  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:11.841812  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:11.841832  306747 cri.go:89] found id: ""
	I1017 19:27:11.841848  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:11.841921  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.845737  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.849826  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:11.849962  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:11.877696  306747 cri.go:89] found id: ""
	I1017 19:27:11.877760  306747 logs.go:282] 0 containers: []
	W1017 19:27:11.877783  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:11.877806  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:11.877878  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:11.905454  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:11.905478  306747 cri.go:89] found id: ""
	I1017 19:27:11.905487  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:11.905551  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:11.909271  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:11.909371  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:11.937354  306747 cri.go:89] found id: ""
	I1017 19:27:11.937378  306747 logs.go:282] 0 containers: []
	W1017 19:27:11.937388  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:11.937397  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:11.937408  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:11.964198  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:11.964227  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:12.047655  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:12.047711  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:12.152282  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:12.152323  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:12.185576  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:12.185607  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:12.216321  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:12.216350  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:12.234007  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:12.234037  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:12.302472  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:12.293592    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.294322    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.296814    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.297401    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.299030    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:12.293592    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.294322    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.296814    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.297401    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:12.299030    3020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:12.302493  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:12.302508  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:12.361658  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:12.361692  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:12.396422  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:12.396455  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:12.450643  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:12.450679  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:14.981141  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:14.992478  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:14.992583  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:15.029616  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:15.029652  306747 cri.go:89] found id: ""
	I1017 19:27:15.029662  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:15.029733  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.034198  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:15.034280  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:15.067180  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:15.067204  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:15.067210  306747 cri.go:89] found id: ""
	I1017 19:27:15.067223  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:15.067278  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.071734  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.075202  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:15.075278  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:15.102244  306747 cri.go:89] found id: ""
	I1017 19:27:15.102269  306747 logs.go:282] 0 containers: []
	W1017 19:27:15.102278  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:15.102285  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:15.102345  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:15.130161  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:15.130189  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:15.130195  306747 cri.go:89] found id: ""
	I1017 19:27:15.130203  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:15.130258  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.134790  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.138971  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:15.139069  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:15.173861  306747 cri.go:89] found id: ""
	I1017 19:27:15.173886  306747 logs.go:282] 0 containers: []
	W1017 19:27:15.173896  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:15.173903  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:15.173964  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:15.202641  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:15.202665  306747 cri.go:89] found id: ""
	I1017 19:27:15.202674  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:15.202732  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:15.206633  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:15.206702  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:15.234246  306747 cri.go:89] found id: ""
	I1017 19:27:15.234273  306747 logs.go:282] 0 containers: []
	W1017 19:27:15.234283  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:15.234294  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:15.234305  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:15.315039  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:15.315073  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:15.418425  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:15.418463  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:15.436291  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:15.436322  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:15.508060  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:15.500418    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.501026    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.502514    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.502986    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.504397    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:15.500418    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.501026    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.502514    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.502986    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:15.504397    3130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:15.508127  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:15.508156  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:15.541312  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:15.541345  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:15.597746  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:15.597777  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:15.630514  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:15.630544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:15.662426  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:15.662454  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:15.690843  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:15.690870  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:15.737261  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:15.737305  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:18.271724  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:18.282865  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:18.282933  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:18.310461  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:18.310530  306747 cri.go:89] found id: ""
	I1017 19:27:18.310545  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:18.310598  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.314206  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:18.314277  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:18.343711  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:18.343736  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:18.343741  306747 cri.go:89] found id: ""
	I1017 19:27:18.343750  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:18.343827  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.347663  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.351287  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:18.351359  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:18.378302  306747 cri.go:89] found id: ""
	I1017 19:27:18.378329  306747 logs.go:282] 0 containers: []
	W1017 19:27:18.378350  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:18.378356  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:18.378434  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:18.405852  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:18.405876  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:18.405881  306747 cri.go:89] found id: ""
	I1017 19:27:18.405889  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:18.405977  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.409609  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.413366  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:18.413434  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:18.438274  306747 cri.go:89] found id: ""
	I1017 19:27:18.438308  306747 logs.go:282] 0 containers: []
	W1017 19:27:18.438332  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:18.438348  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:18.438428  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:18.465310  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:18.465379  306747 cri.go:89] found id: ""
	I1017 19:27:18.465394  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:18.465449  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:18.469114  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:18.469267  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:18.495209  306747 cri.go:89] found id: ""
	I1017 19:27:18.495236  306747 logs.go:282] 0 containers: []
	W1017 19:27:18.495245  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:18.495254  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:18.495269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:18.521513  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:18.521541  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:18.551762  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:18.551788  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:18.647502  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:18.647539  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:18.665784  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:18.665815  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:18.718577  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:18.718624  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:18.777594  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:18.777628  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:18.807963  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:18.807989  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:18.892875  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:18.892910  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:18.960765  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:18.951643    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.952944    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.953536    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.955189    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.955840    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:18.951643    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.952944    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.953536    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.955189    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:18.955840    3313 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:18.960787  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:18.960801  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:18.988908  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:18.988936  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:21.525356  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:21.536317  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:21.536383  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:21.562005  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:21.562074  306747 cri.go:89] found id: ""
	I1017 19:27:21.562089  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:21.562148  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.565814  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:21.565899  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:21.593641  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:21.593662  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:21.593668  306747 cri.go:89] found id: ""
	I1017 19:27:21.593675  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:21.593728  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.597715  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.601210  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:21.601286  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:21.626313  306747 cri.go:89] found id: ""
	I1017 19:27:21.626339  306747 logs.go:282] 0 containers: []
	W1017 19:27:21.626349  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:21.626355  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:21.626413  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:21.658772  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:21.658794  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:21.658800  306747 cri.go:89] found id: ""
	I1017 19:27:21.658807  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:21.658866  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.662812  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.666487  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:21.666561  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:21.698844  306747 cri.go:89] found id: ""
	I1017 19:27:21.698905  306747 logs.go:282] 0 containers: []
	W1017 19:27:21.698927  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:21.698951  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:21.699030  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:21.728779  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:21.728838  306747 cri.go:89] found id: ""
	I1017 19:27:21.728865  306747 logs.go:282] 1 containers: [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:21.728939  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:21.732581  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:21.732691  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:21.758611  306747 cri.go:89] found id: ""
	I1017 19:27:21.758636  306747 logs.go:282] 0 containers: []
	W1017 19:27:21.758645  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:21.758655  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:21.758685  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:21.853910  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:21.853951  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:21.929259  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:21.920729    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.921839    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.923480    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.923794    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.925410    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:21.920729    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.921839    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.923480    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.923794    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:21.925410    3400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:21.929281  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:21.929294  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:21.969445  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:21.969472  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:22.060427  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:22.060560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:22.126121  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:22.126202  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:22.196425  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:22.196503  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:22.261955  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:22.262043  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:22.285064  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:22.285159  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:22.339749  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:22.339827  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:22.385350  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:22.385427  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:24.966467  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:24.992294  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:24.992366  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:25.035727  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:25.035754  306747 cri.go:89] found id: ""
	I1017 19:27:25.035762  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:25.035847  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.040229  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:25.040304  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:25.088117  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:25.088145  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:25.088152  306747 cri.go:89] found id: ""
	I1017 19:27:25.088159  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:25.088215  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.092329  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.099299  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:25.099383  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:25.150822  306747 cri.go:89] found id: ""
	I1017 19:27:25.150858  306747 logs.go:282] 0 containers: []
	W1017 19:27:25.150868  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:25.150878  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:25.150945  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:25.211825  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:25.211850  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:25.211855  306747 cri.go:89] found id: ""
	I1017 19:27:25.211863  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:25.211927  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.217398  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.221047  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:25.221126  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:25.258850  306747 cri.go:89] found id: ""
	I1017 19:27:25.258885  306747 logs.go:282] 0 containers: []
	W1017 19:27:25.258895  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:25.258904  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:25.258968  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:25.295477  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:25.295500  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:25.295512  306747 cri.go:89] found id: ""
	I1017 19:27:25.295520  306747 logs.go:282] 2 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:25.295576  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.301386  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:25.305803  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:25.305873  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:25.334929  306747 cri.go:89] found id: ""
	I1017 19:27:25.334954  306747 logs.go:282] 0 containers: []
	W1017 19:27:25.334970  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:25.334986  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:25.335006  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:25.365373  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:25.365402  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:25.382590  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:25.382626  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:25.432469  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:25.432570  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:25.478525  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:25.478601  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:25.551480  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:25.551560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:25.583783  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:25.583858  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:25.679255  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:25.679301  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:25.739090  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:25.739118  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:25.854982  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:25.855021  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:25.955288  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:25.946765    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.947610    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.949285    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.949589    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.951072    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:25.946765    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.947610    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.949285    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.949589    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:25.951072    3607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:25.955307  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:25.955319  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:26.000458  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:26.000579  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:28.530525  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:28.542430  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:28.542500  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:28.570373  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:28.570394  306747 cri.go:89] found id: ""
	I1017 19:27:28.570402  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:28.570454  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.575832  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:28.575903  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:28.604287  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:28.604307  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:28.604313  306747 cri.go:89] found id: ""
	I1017 19:27:28.604320  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:28.604374  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.608248  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.612312  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:28.612380  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:28.638709  306747 cri.go:89] found id: ""
	I1017 19:27:28.638735  306747 logs.go:282] 0 containers: []
	W1017 19:27:28.638743  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:28.638750  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:28.638807  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:28.665927  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:28.665951  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:28.665957  306747 cri.go:89] found id: ""
	I1017 19:27:28.665964  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:28.666022  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.669671  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.673220  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:28.673317  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:28.703161  306747 cri.go:89] found id: ""
	I1017 19:27:28.703188  306747 logs.go:282] 0 containers: []
	W1017 19:27:28.703197  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:28.703204  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:28.703264  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:28.733314  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:28.733379  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:28.733389  306747 cri.go:89] found id: ""
	I1017 19:27:28.733397  306747 logs.go:282] 2 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:28.733460  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.736998  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:28.740330  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:28.740444  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:28.765130  306747 cri.go:89] found id: ""
	I1017 19:27:28.765156  306747 logs.go:282] 0 containers: []
	W1017 19:27:28.765165  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:28.765174  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:28.765216  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:28.834887  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:28.826610    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.827402    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.829127    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.829428    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.830934    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:28.826610    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.827402    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.829127    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.829428    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:28.830934    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:28.834910  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:28.834923  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:28.870142  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:28.870187  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:28.912354  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:28.912388  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:28.968695  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:28.968728  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:29.009047  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:29.009078  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:29.036706  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:29.036734  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:29.120616  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:29.120654  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:29.153285  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:29.153313  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:29.250625  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:29.250664  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:29.271875  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:29.271907  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:29.321668  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:29.321703  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:31.848333  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:31.859324  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:31.859392  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:31.892308  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:31.892331  306747 cri.go:89] found id: ""
	I1017 19:27:31.892347  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:31.892401  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.896342  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:31.896433  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:31.924335  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:31.924359  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:31.924364  306747 cri.go:89] found id: ""
	I1017 19:27:31.924371  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:31.924446  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.928119  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.931375  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:31.931444  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:31.961757  306747 cri.go:89] found id: ""
	I1017 19:27:31.961783  306747 logs.go:282] 0 containers: []
	W1017 19:27:31.961792  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:31.961800  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:31.961857  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:31.990900  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:31.990924  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:31.990929  306747 cri.go:89] found id: ""
	I1017 19:27:31.990937  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:31.990997  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.994670  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:31.998160  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:31.998292  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:32.030448  306747 cri.go:89] found id: ""
	I1017 19:27:32.030523  306747 logs.go:282] 0 containers: []
	W1017 19:27:32.030539  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:32.030548  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:32.030615  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:32.062242  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:32.062267  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:32.062272  306747 cri.go:89] found id: ""
	I1017 19:27:32.062280  306747 logs.go:282] 2 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:32.062332  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:32.066062  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:32.069606  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:32.069682  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:32.102492  306747 cri.go:89] found id: ""
	I1017 19:27:32.102534  306747 logs.go:282] 0 containers: []
	W1017 19:27:32.102544  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:32.102553  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:32.102566  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:32.179017  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:32.170484    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.170960    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.172496    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.172884    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.174718    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:32.170484    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.170960    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.172496    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.172884    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:32.174718    3843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:32.179037  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:32.179050  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:32.225447  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:32.225475  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:32.270526  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:32.270557  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:32.304149  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:32.304181  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:32.330757  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:32.330837  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:32.410571  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:32.410610  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:32.443417  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:32.443444  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:32.461860  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:32.461890  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:32.510037  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:32.510083  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:32.569278  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:32.569325  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:32.602243  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:32.602269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:35.200643  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:35.211574  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:35.211646  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:35.243134  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:35.243158  306747 cri.go:89] found id: ""
	I1017 19:27:35.243166  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:35.243222  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.247054  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:35.247144  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:35.276216  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:35.276237  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:35.276243  306747 cri.go:89] found id: ""
	I1017 19:27:35.276251  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:35.276304  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.280057  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.284007  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:35.284080  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:35.310830  306747 cri.go:89] found id: ""
	I1017 19:27:35.310909  306747 logs.go:282] 0 containers: []
	W1017 19:27:35.310932  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:35.310955  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:35.311062  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:35.354572  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:35.354597  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:35.354602  306747 cri.go:89] found id: ""
	I1017 19:27:35.354610  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:35.354666  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.358450  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.361871  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:35.361942  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:35.389041  306747 cri.go:89] found id: ""
	I1017 19:27:35.389065  306747 logs.go:282] 0 containers: []
	W1017 19:27:35.389073  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:35.389079  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:35.389137  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:35.415942  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:35.415967  306747 cri.go:89] found id: "dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:35.415972  306747 cri.go:89] found id: ""
	I1017 19:27:35.415980  306747 logs.go:282] 2 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f]
	I1017 19:27:35.416037  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.419700  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:35.423643  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:35.423765  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:35.450381  306747 cri.go:89] found id: ""
	I1017 19:27:35.450404  306747 logs.go:282] 0 containers: []
	W1017 19:27:35.450413  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:35.450422  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:35.450435  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:35.478252  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:35.478280  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:35.522590  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:35.522623  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:35.578335  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:35.578372  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:35.613061  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:35.613091  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:35.638492  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:35.638520  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:35.722854  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:35.722891  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:35.757639  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:35.757672  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:35.863697  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:35.863735  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:35.940574  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:35.932704    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.933394    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.935016    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.935464    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.936965    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:35.932704    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.933394    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.935016    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.935464    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:35.936965    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:35.940597  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:35.940610  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:35.976992  306747 logs.go:123] Gathering logs for kube-controller-manager [dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f] ...
	I1017 19:27:35.977024  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dcfd13539d4da30c8b5cafac8cb9256ffcbd4fcb849cc780394f0ced727e501f"
	I1017 19:27:36.004857  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:36.004894  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:38.527370  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:38.538426  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:38.538499  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:38.564462  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:38.564484  306747 cri.go:89] found id: ""
	I1017 19:27:38.564504  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:38.564583  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.568393  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:38.568469  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:38.593756  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:38.593785  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:38.593790  306747 cri.go:89] found id: ""
	I1017 19:27:38.593797  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:38.593850  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.597636  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.601069  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:38.601138  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:38.628357  306747 cri.go:89] found id: ""
	I1017 19:27:38.628382  306747 logs.go:282] 0 containers: []
	W1017 19:27:38.628391  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:38.628398  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:38.628455  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:38.653998  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:38.654020  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:38.654025  306747 cri.go:89] found id: ""
	I1017 19:27:38.654033  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:38.654092  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.658000  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.661429  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:38.661500  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:38.687831  306747 cri.go:89] found id: ""
	I1017 19:27:38.687857  306747 logs.go:282] 0 containers: []
	W1017 19:27:38.687866  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:38.687873  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:38.687939  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:38.728871  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:38.728893  306747 cri.go:89] found id: ""
	I1017 19:27:38.728902  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:38.728956  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:38.732553  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:38.732626  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:38.758108  306747 cri.go:89] found id: ""
	I1017 19:27:38.758131  306747 logs.go:282] 0 containers: []
	W1017 19:27:38.758139  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:38.758149  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:38.758160  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:38.856927  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:38.857005  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:38.875545  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:38.875575  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:38.948879  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:38.941082    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.941735    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.943334    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.943798    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.945334    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:38.941082    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.941735    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.943334    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.943798    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:38.945334    4141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:38.948901  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:38.948914  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:38.997335  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:38.997372  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:39.029015  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:39.029043  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:39.108011  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:39.108046  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:39.141940  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:39.141971  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:39.170446  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:39.170472  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:39.208445  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:39.208481  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:39.272902  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:39.272952  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:41.807281  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:41.817677  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:41.817808  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:41.847030  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:41.847052  306747 cri.go:89] found id: ""
	I1017 19:27:41.847060  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:41.847141  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.856702  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:41.856768  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:41.882291  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:41.882314  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:41.882320  306747 cri.go:89] found id: ""
	I1017 19:27:41.882337  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:41.882441  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.886489  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.896574  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:41.896698  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:41.922724  306747 cri.go:89] found id: ""
	I1017 19:27:41.922748  306747 logs.go:282] 0 containers: []
	W1017 19:27:41.922757  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:41.922763  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:41.922817  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:41.948998  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:41.949024  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:41.949030  306747 cri.go:89] found id: ""
	I1017 19:27:41.949038  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:41.949090  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.961165  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:41.965546  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:41.965617  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:41.994892  306747 cri.go:89] found id: ""
	I1017 19:27:41.994917  306747 logs.go:282] 0 containers: []
	W1017 19:27:41.994935  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:41.994943  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:41.995002  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:42.028588  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:42.028626  306747 cri.go:89] found id: ""
	I1017 19:27:42.028636  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:42.028712  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:42.035671  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:42.035764  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:42.067030  306747 cri.go:89] found id: ""
	I1017 19:27:42.067061  306747 logs.go:282] 0 containers: []
	W1017 19:27:42.067072  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:42.067081  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:42.067105  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:42.109133  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:42.109175  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:42.199861  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:42.199955  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:42.342289  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:42.342335  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:42.363849  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:42.363906  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:42.441824  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:42.432639    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.433836    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.434718    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.436054    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.436745    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:42.432639    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.433836    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.434718    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.436054    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:42.436745    4289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:42.441858  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:42.441872  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:42.471376  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:42.471404  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:42.516923  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:42.516960  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:42.595252  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:42.595288  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:42.623727  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:42.623757  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:42.665018  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:42.665048  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:45.203111  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:45.228005  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:45.228167  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:45.284064  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:45.284089  306747 cri.go:89] found id: ""
	I1017 19:27:45.284098  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:45.284165  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.293975  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:45.294167  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:45.366214  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:45.366372  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:45.366394  306747 cri.go:89] found id: ""
	I1017 19:27:45.366421  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:45.366520  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.385006  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.397052  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:45.397258  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:45.444612  306747 cri.go:89] found id: ""
	I1017 19:27:45.444689  306747 logs.go:282] 0 containers: []
	W1017 19:27:45.444712  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:45.444737  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:45.444839  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:45.475398  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:45.475418  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:45.475422  306747 cri.go:89] found id: ""
	I1017 19:27:45.475430  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:45.475483  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.480459  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.484700  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:45.484826  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:45.516264  306747 cri.go:89] found id: ""
	I1017 19:27:45.516289  306747 logs.go:282] 0 containers: []
	W1017 19:27:45.516298  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:45.516305  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:45.516385  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:45.545867  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:45.545891  306747 cri.go:89] found id: ""
	I1017 19:27:45.545900  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:45.545955  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:45.549781  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:45.549898  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:45.578811  306747 cri.go:89] found id: ""
	I1017 19:27:45.578837  306747 logs.go:282] 0 containers: []
	W1017 19:27:45.578847  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:45.578857  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:45.578870  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:45.605475  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:45.605507  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:45.687039  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:45.687081  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:45.755076  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:45.746538    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.747381    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.749046    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.749635    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.751252    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:45.746538    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.747381    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.749046    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.749635    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:45.751252    4423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:45.755099  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:45.755114  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:45.784001  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:45.784034  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:45.837928  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:45.837964  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:45.914633  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:45.914670  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:45.950096  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:45.950123  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:46.054149  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:46.054194  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:46.072594  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:46.072628  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:46.111999  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:46.112030  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:48.642924  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:48.653451  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:48.653519  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:48.679639  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:48.679659  306747 cri.go:89] found id: ""
	I1017 19:27:48.679667  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:48.679720  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.683701  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:48.683775  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:48.711679  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:48.711701  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:48.711707  306747 cri.go:89] found id: ""
	I1017 19:27:48.711714  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:48.711767  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.715462  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.718828  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:48.718914  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:48.745090  306747 cri.go:89] found id: ""
	I1017 19:27:48.745156  306747 logs.go:282] 0 containers: []
	W1017 19:27:48.745170  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:48.745178  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:48.745236  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:48.772250  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:48.772273  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:48.772278  306747 cri.go:89] found id: ""
	I1017 19:27:48.772286  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:48.772344  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.776030  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.779386  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:48.779454  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:48.805859  306747 cri.go:89] found id: ""
	I1017 19:27:48.805884  306747 logs.go:282] 0 containers: []
	W1017 19:27:48.805893  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:48.805900  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:48.805957  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:48.831953  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:48.831975  306747 cri.go:89] found id: ""
	I1017 19:27:48.831984  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:48.832040  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:48.835702  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:48.835770  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:48.869137  306747 cri.go:89] found id: ""
	I1017 19:27:48.869159  306747 logs.go:282] 0 containers: []
	W1017 19:27:48.869168  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:48.869177  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:48.869190  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:48.910676  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:48.910711  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:48.972655  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:48.972690  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:49.013320  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:49.013350  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:49.093756  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:49.093796  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:49.137959  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:49.137988  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:49.207174  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:49.198952    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.199631    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.201291    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.201757    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.203195    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:49.198952    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.199631    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.201291    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.201757    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:49.203195    4589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:49.207199  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:49.207215  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:49.255066  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:49.255135  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:49.283732  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:49.283760  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:49.395846  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:49.395882  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:49.414130  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:49.414161  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:51.941734  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:51.953584  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:51.953657  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:51.984051  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:51.984073  306747 cri.go:89] found id: ""
	I1017 19:27:51.984081  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:51.984225  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:51.989195  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:51.989276  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:52.018264  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:52.018291  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:52.018296  306747 cri.go:89] found id: ""
	I1017 19:27:52.018305  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:52.018390  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.022319  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.026112  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:52.026196  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:52.054070  306747 cri.go:89] found id: ""
	I1017 19:27:52.054097  306747 logs.go:282] 0 containers: []
	W1017 19:27:52.054107  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:52.054114  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:52.054234  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:52.091016  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:52.091040  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:52.091045  306747 cri.go:89] found id: ""
	I1017 19:27:52.091052  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:52.091109  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.095213  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.098982  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:52.099079  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:52.126556  306747 cri.go:89] found id: ""
	I1017 19:27:52.126590  306747 logs.go:282] 0 containers: []
	W1017 19:27:52.126601  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:52.126607  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:52.126676  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:52.158449  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:52.158473  306747 cri.go:89] found id: ""
	I1017 19:27:52.158482  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:52.158543  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:52.162572  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:52.162647  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:52.192007  306747 cri.go:89] found id: ""
	I1017 19:27:52.192033  306747 logs.go:282] 0 containers: []
	W1017 19:27:52.192042  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:52.192052  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:52.192066  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:52.209934  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:52.209966  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:52.285387  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:52.276095    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.276908    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.278520    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.279497    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.280119    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:52.276095    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.276908    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.278520    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.279497    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:52.280119    4697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:52.285410  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:52.285426  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:52.314784  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:52.314812  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:52.349858  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:52.349896  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:52.417120  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:52.417160  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:52.447498  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:52.447525  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:52.525405  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:52.525442  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:52.568336  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:52.568364  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:52.667592  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:52.667629  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:52.714508  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:52.714544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:55.241965  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:55.252843  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:55.252914  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:55.281150  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:55.281173  306747 cri.go:89] found id: ""
	I1017 19:27:55.281181  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:55.281254  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.285436  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:55.285508  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:55.311561  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:55.311585  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:55.311590  306747 cri.go:89] found id: ""
	I1017 19:27:55.311598  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:55.311654  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.315303  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.318720  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:55.318789  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:55.342910  306747 cri.go:89] found id: ""
	I1017 19:27:55.342937  306747 logs.go:282] 0 containers: []
	W1017 19:27:55.342946  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:55.342953  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:55.343012  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:55.369108  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:55.369130  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:55.369136  306747 cri.go:89] found id: ""
	I1017 19:27:55.369154  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:55.369212  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.372980  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.376499  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:55.376598  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:55.409872  306747 cri.go:89] found id: ""
	I1017 19:27:55.409898  306747 logs.go:282] 0 containers: []
	W1017 19:27:55.409907  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:55.409914  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:55.409970  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:55.435703  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:55.435725  306747 cri.go:89] found id: ""
	I1017 19:27:55.435734  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:55.435787  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:55.439520  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:55.439587  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:55.466991  306747 cri.go:89] found id: ""
	I1017 19:27:55.467017  306747 logs.go:282] 0 containers: []
	W1017 19:27:55.467026  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:55.467036  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:55.467048  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:55.492985  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:55.493014  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:55.566914  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:55.566950  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:55.643727  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:55.635444    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.636184    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.637061    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.638074    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.638650    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:55.635444    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.636184    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.637061    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.638074    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:55.638650    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:55.643796  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:55.643817  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:55.670365  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:55.670394  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:55.705898  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:55.705936  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:55.732124  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:55.732152  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:55.762958  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:55.762987  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:55.857491  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:55.857528  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:27:55.875620  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:55.875658  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:55.953454  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:55.953501  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:58.520452  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:27:58.530935  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:27:58.531015  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:27:58.557433  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:58.557455  306747 cri.go:89] found id: ""
	I1017 19:27:58.557464  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:27:58.557521  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.561276  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:27:58.561345  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:27:58.587982  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:58.588006  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:58.588011  306747 cri.go:89] found id: ""
	I1017 19:27:58.588018  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:27:58.588072  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.591894  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.595410  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:27:58.595490  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:27:58.620930  306747 cri.go:89] found id: ""
	I1017 19:27:58.620956  306747 logs.go:282] 0 containers: []
	W1017 19:27:58.620966  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:27:58.620972  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:27:58.621038  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:27:58.646484  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:58.646509  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:58.646514  306747 cri.go:89] found id: ""
	I1017 19:27:58.646522  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:27:58.646573  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.650281  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.653491  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:27:58.653564  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:27:58.679227  306747 cri.go:89] found id: ""
	I1017 19:27:58.679251  306747 logs.go:282] 0 containers: []
	W1017 19:27:58.679261  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:27:58.679271  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:27:58.679329  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:27:58.712878  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:58.712901  306747 cri.go:89] found id: ""
	I1017 19:27:58.712910  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:27:58.712965  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:27:58.717668  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:27:58.717744  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:27:58.743926  306747 cri.go:89] found id: ""
	I1017 19:27:58.743950  306747 logs.go:282] 0 containers: []
	W1017 19:27:58.743960  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:27:58.743969  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:27:58.743981  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:27:58.816251  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:27:58.808176    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.809065    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.810666    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.810959    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.812492    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:27:58.808176    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.809065    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.810666    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.810959    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:27:58.812492    4969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:27:58.816275  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:27:58.816289  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:27:58.880149  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:27:58.880187  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:27:58.926347  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:27:58.926379  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:27:58.959298  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:27:58.959326  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:27:58.985914  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:27:58.985941  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:27:59.060169  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:27:59.060206  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:27:59.098174  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:27:59.098204  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:27:59.193263  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:27:59.193298  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:27:59.223428  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:27:59.223461  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:27:59.282679  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:27:59.282714  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:01.802237  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:01.814388  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:01.814466  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:01.840376  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:01.840398  306747 cri.go:89] found id: ""
	I1017 19:28:01.840412  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:01.840465  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.844426  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:01.844496  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:01.873063  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:01.873085  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:01.873090  306747 cri.go:89] found id: ""
	I1017 19:28:01.873098  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:01.873155  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.877190  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.881085  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:01.881173  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:01.908701  306747 cri.go:89] found id: ""
	I1017 19:28:01.908726  306747 logs.go:282] 0 containers: []
	W1017 19:28:01.908736  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:01.908742  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:01.908799  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:01.936306  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:01.936330  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:01.936335  306747 cri.go:89] found id: ""
	I1017 19:28:01.936343  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:01.936397  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.940768  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:01.946060  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:01.946131  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:01.974191  306747 cri.go:89] found id: ""
	I1017 19:28:01.974217  306747 logs.go:282] 0 containers: []
	W1017 19:28:01.974227  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:01.974234  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:01.974299  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:02.003021  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:02.003047  306747 cri.go:89] found id: ""
	I1017 19:28:02.003056  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:02.003132  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:02.016728  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:02.016803  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:02.046662  306747 cri.go:89] found id: ""
	I1017 19:28:02.046688  306747 logs.go:282] 0 containers: []
	W1017 19:28:02.046697  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:02.046708  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:02.046744  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:02.076638  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:02.076670  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:02.097353  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:02.097384  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:02.149812  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:02.149852  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:02.212958  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:02.212995  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:02.242664  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:02.242692  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:02.329225  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:02.329262  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:02.364870  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:02.364906  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:02.472339  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:02.472377  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:02.541865  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:02.533392    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.534027    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.535792    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.536454    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.537580    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:02.533392    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.534027    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.535792    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.536454    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:02.537580    5158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:02.541887  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:02.541900  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:02.570859  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:02.570888  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:05.110395  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:05.121645  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:05.121716  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:05.153742  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:05.153766  306747 cri.go:89] found id: ""
	I1017 19:28:05.153775  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:05.153829  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.157576  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:05.157647  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:05.184788  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:05.184810  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:05.184815  306747 cri.go:89] found id: ""
	I1017 19:28:05.184823  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:05.184878  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.188586  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.192151  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:05.192222  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:05.222405  306747 cri.go:89] found id: ""
	I1017 19:28:05.222437  306747 logs.go:282] 0 containers: []
	W1017 19:28:05.222447  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:05.222453  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:05.222512  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:05.251383  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:05.251408  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:05.251413  306747 cri.go:89] found id: ""
	I1017 19:28:05.251421  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:05.251474  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.255443  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.258903  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:05.258971  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:05.289906  306747 cri.go:89] found id: ""
	I1017 19:28:05.289983  306747 logs.go:282] 0 containers: []
	W1017 19:28:05.289999  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:05.290007  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:05.290065  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:05.317057  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:05.317122  306747 cri.go:89] found id: ""
	I1017 19:28:05.317136  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:05.317202  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:05.320997  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:05.321071  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:05.350310  306747 cri.go:89] found id: ""
	I1017 19:28:05.350335  306747 logs.go:282] 0 containers: []
	W1017 19:28:05.350344  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:05.350353  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:05.350364  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:05.387607  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:05.387637  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:05.456949  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:05.448355    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.449098    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.450777    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.451358    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.452970    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:05.448355    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.449098    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.450777    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.451358    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:05.452970    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:05.457018  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:05.457045  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:05.484064  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:05.484139  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:05.543816  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:05.543851  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:05.573032  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:05.573058  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:05.651816  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:05.651853  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:05.753730  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:05.753765  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:05.772288  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:05.772320  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:05.827946  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:05.827982  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:05.872696  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:05.872731  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:08.406970  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:08.417284  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:08.417352  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:08.443772  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:08.443796  306747 cri.go:89] found id: ""
	I1017 19:28:08.443815  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:08.443868  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.447541  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:08.447633  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:08.472976  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:08.473004  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:08.473009  306747 cri.go:89] found id: ""
	I1017 19:28:08.473017  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:08.473070  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.476664  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.480025  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:08.480095  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:08.507100  306747 cri.go:89] found id: ""
	I1017 19:28:08.507122  306747 logs.go:282] 0 containers: []
	W1017 19:28:08.507130  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:08.507136  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:08.507194  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:08.532864  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:08.532888  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:08.532895  306747 cri.go:89] found id: ""
	I1017 19:28:08.532912  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:08.532966  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.536602  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.540037  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:08.540108  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:08.566233  306747 cri.go:89] found id: ""
	I1017 19:28:08.566258  306747 logs.go:282] 0 containers: []
	W1017 19:28:08.566267  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:08.566273  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:08.566348  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:08.593545  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:08.593568  306747 cri.go:89] found id: ""
	I1017 19:28:08.593577  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:08.593630  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:08.597170  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:08.597251  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:08.622805  306747 cri.go:89] found id: ""
	I1017 19:28:08.622829  306747 logs.go:282] 0 containers: []
	W1017 19:28:08.622838  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:08.622847  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:08.622886  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:08.718117  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:08.718158  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:08.736317  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:08.736358  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:08.785165  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:08.785200  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:08.813123  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:08.813154  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:08.842670  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:08.842698  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:08.883049  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:08.883081  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:08.948658  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:08.940826    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.941602    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.943150    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.943452    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.944921    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:08.940826    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.941602    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.943150    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.943452    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:08.944921    5423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:08.948680  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:08.948693  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:08.975235  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:08.975261  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:09.023572  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:09.023607  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:09.085674  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:09.085713  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:11.674341  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:11.684867  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:11.684937  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:11.710235  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:11.710258  306747 cri.go:89] found id: ""
	I1017 19:28:11.710266  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:11.710317  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.713823  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:11.713893  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:11.743536  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:11.743557  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:11.743564  306747 cri.go:89] found id: ""
	I1017 19:28:11.743571  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:11.743623  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.747225  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.750360  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:11.750423  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:11.775489  306747 cri.go:89] found id: ""
	I1017 19:28:11.775553  306747 logs.go:282] 0 containers: []
	W1017 19:28:11.775575  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:11.775599  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:11.775689  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:11.804973  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:11.804993  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:11.804999  306747 cri.go:89] found id: ""
	I1017 19:28:11.805007  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:11.805064  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.809085  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.812425  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:11.812493  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:11.839019  306747 cri.go:89] found id: ""
	I1017 19:28:11.839042  306747 logs.go:282] 0 containers: []
	W1017 19:28:11.839051  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:11.839057  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:11.839113  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:11.867946  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:11.868012  306747 cri.go:89] found id: ""
	I1017 19:28:11.868036  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:11.868125  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:11.871735  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:11.871847  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:11.917369  306747 cri.go:89] found id: ""
	I1017 19:28:11.917435  306747 logs.go:282] 0 containers: []
	W1017 19:28:11.917448  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:11.917458  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:11.917473  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:12.015837  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:12.015876  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:12.037612  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:12.037645  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:12.066665  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:12.066695  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:12.124283  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:12.124321  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:12.157456  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:12.157487  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:12.218566  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:12.218603  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:12.246576  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:12.246601  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:12.323228  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:12.323263  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:12.389358  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:12.381335    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.382085    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.383576    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.384016    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.385432    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:12.381335    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.382085    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.383576    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.384016    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:12.385432    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:12.389381  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:12.389394  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:12.420218  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:12.420248  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:14.967518  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:14.978398  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:14.978489  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:15.008833  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:15.008861  306747 cri.go:89] found id: ""
	I1017 19:28:15.008869  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:15.008962  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.019024  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:15.019115  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:15.048619  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:15.048641  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:15.048646  306747 cri.go:89] found id: ""
	I1017 19:28:15.048653  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:15.048711  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.052829  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.056849  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:15.056960  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:15.090614  306747 cri.go:89] found id: ""
	I1017 19:28:15.090646  306747 logs.go:282] 0 containers: []
	W1017 19:28:15.090670  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:15.090679  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:15.090755  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:15.121287  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:15.121354  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:15.121367  306747 cri.go:89] found id: ""
	I1017 19:28:15.121376  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:15.121441  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.126749  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.130705  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:15.130786  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:15.158437  306747 cri.go:89] found id: ""
	I1017 19:28:15.158462  306747 logs.go:282] 0 containers: []
	W1017 19:28:15.158472  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:15.158479  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:15.158542  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:15.187795  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:15.187819  306747 cri.go:89] found id: ""
	I1017 19:28:15.187828  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:15.187885  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:15.191939  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:15.192014  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:15.221830  306747 cri.go:89] found id: ""
	I1017 19:28:15.221856  306747 logs.go:282] 0 containers: []
	W1017 19:28:15.221866  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:15.221875  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:15.221886  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:15.314949  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:15.314983  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:15.334443  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:15.334524  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:15.391124  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:15.391159  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:15.464757  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:15.464794  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:15.499089  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:15.499118  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:15.572721  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:15.572758  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:15.604780  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:15.604809  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:15.673978  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:15.665870    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.666574    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.668276    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.668888    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.670272    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:15.665870    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.666574    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.668276    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.668888    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:15.670272    5692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:15.674001  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:15.674014  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:15.703550  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:15.703577  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:15.736137  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:15.736167  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:18.272459  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:18.284130  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:18.284202  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:18.317045  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:18.317114  306747 cri.go:89] found id: ""
	I1017 19:28:18.317140  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:18.317200  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.320946  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:18.321021  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:18.349966  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:18.350047  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:18.350069  306747 cri.go:89] found id: ""
	I1017 19:28:18.350078  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:18.350146  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.354094  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.357736  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:18.357840  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:18.389890  306747 cri.go:89] found id: ""
	I1017 19:28:18.389914  306747 logs.go:282] 0 containers: []
	W1017 19:28:18.389923  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:18.389929  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:18.389990  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:18.416552  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:18.416573  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:18.416577  306747 cri.go:89] found id: ""
	I1017 19:28:18.416584  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:18.416636  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.421408  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.425021  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:18.425127  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:18.451716  306747 cri.go:89] found id: ""
	I1017 19:28:18.451744  306747 logs.go:282] 0 containers: []
	W1017 19:28:18.451754  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:18.451760  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:18.451824  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:18.486286  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:18.486355  306747 cri.go:89] found id: ""
	I1017 19:28:18.486370  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:18.486424  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:18.490097  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:18.490214  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:18.517834  306747 cri.go:89] found id: ""
	I1017 19:28:18.517859  306747 logs.go:282] 0 containers: []
	W1017 19:28:18.517868  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:18.517877  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:18.517907  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:18.569373  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:18.569412  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:18.597414  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:18.597442  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:18.615623  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:18.615651  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:18.687384  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:18.679364    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.680188    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.681715    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.682200    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.683729    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:18.679364    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.680188    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.681715    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.682200    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:18.683729    5804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:18.687406  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:18.687420  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:18.724107  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:18.724135  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:18.757798  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:18.757832  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:18.823518  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:18.823556  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:18.868332  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:18.868358  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:18.948355  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:18.948391  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:18.980022  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:18.980052  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:21.580647  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:21.591760  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:21.591828  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:21.619734  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:21.619755  306747 cri.go:89] found id: ""
	I1017 19:28:21.619763  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:21.619822  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.623634  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:21.623706  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:21.650174  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:21.650202  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:21.650207  306747 cri.go:89] found id: ""
	I1017 19:28:21.650215  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:21.650275  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.654337  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.658320  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:21.658390  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:21.685562  306747 cri.go:89] found id: ""
	I1017 19:28:21.685587  306747 logs.go:282] 0 containers: []
	W1017 19:28:21.685596  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:21.685602  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:21.685696  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:21.711151  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:21.711175  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:21.711180  306747 cri.go:89] found id: ""
	I1017 19:28:21.711188  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:21.711241  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.714981  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.718517  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:21.718587  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:21.745770  306747 cri.go:89] found id: ""
	I1017 19:28:21.745796  306747 logs.go:282] 0 containers: []
	W1017 19:28:21.745805  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:21.745812  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:21.745872  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:21.773020  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:21.773042  306747 cri.go:89] found id: ""
	I1017 19:28:21.773052  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:21.773107  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:21.776980  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:21.777073  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:21.805110  306747 cri.go:89] found id: ""
	I1017 19:28:21.805137  306747 logs.go:282] 0 containers: []
	W1017 19:28:21.805146  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:21.805156  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:21.805187  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:21.915295  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:21.915339  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:21.934521  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:21.934553  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:21.971829  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:21.971867  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:22.032460  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:22.032500  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:22.069813  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:22.069901  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:22.150515  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:22.150553  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:22.186817  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:22.186843  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:22.250982  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:22.242783    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.243418    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.244975    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.245572    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.247184    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:22.242783    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.243418    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.244975    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.245572    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:22.247184    5968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:22.251005  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:22.251019  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:22.318367  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:22.318403  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:22.359962  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:22.359991  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:24.888496  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:24.899632  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:24.899701  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:24.927106  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:24.927126  306747 cri.go:89] found id: ""
	I1017 19:28:24.927135  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:24.927191  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:24.930789  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:24.930901  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:24.957962  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:24.957986  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:24.957992  306747 cri.go:89] found id: ""
	I1017 19:28:24.958000  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:24.958052  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:24.961689  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:24.965312  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:24.965388  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:24.999567  306747 cri.go:89] found id: ""
	I1017 19:28:24.999646  306747 logs.go:282] 0 containers: []
	W1017 19:28:24.999670  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:24.999692  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:24.999784  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:25.030377  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:25.030447  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:25.030466  306747 cri.go:89] found id: ""
	I1017 19:28:25.030493  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:25.030587  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:25.034492  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:25.038213  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:25.038307  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:25.064926  306747 cri.go:89] found id: ""
	I1017 19:28:25.065005  306747 logs.go:282] 0 containers: []
	W1017 19:28:25.065022  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:25.065029  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:25.065092  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:25.104761  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:25.104835  306747 cri.go:89] found id: ""
	I1017 19:28:25.104851  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:25.104908  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:25.109062  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:25.109153  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:25.137891  306747 cri.go:89] found id: ""
	I1017 19:28:25.137923  306747 logs.go:282] 0 containers: []
	W1017 19:28:25.137931  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:25.137940  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:25.137953  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:25.170975  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:25.171007  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:25.204002  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:25.204031  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:25.297840  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:25.297914  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:25.315642  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:25.315682  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:25.369974  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:25.370011  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:25.452713  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:25.452749  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:25.483409  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:25.483439  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:25.558385  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:25.550412    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.551034    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.552731    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.553294    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.554883    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:25.550412    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.551034    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.552731    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.553294    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:25.554883    6103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:25.558408  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:25.558421  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:25.585961  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:25.585989  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:25.617689  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:25.617720  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:28.181797  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:28.193078  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:28.193193  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:28.220858  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:28.220880  306747 cri.go:89] found id: ""
	I1017 19:28:28.220889  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:28.220949  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.224889  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:28.224962  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:28.256761  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:28.256782  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:28.256787  306747 cri.go:89] found id: ""
	I1017 19:28:28.256795  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:28.256849  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.261049  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.264952  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:28.265076  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:28.291441  306747 cri.go:89] found id: ""
	I1017 19:28:28.291509  306747 logs.go:282] 0 containers: []
	W1017 19:28:28.291533  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:28.291556  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:28.291641  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:28.318704  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:28.318768  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:28.318790  306747 cri.go:89] found id: ""
	I1017 19:28:28.318815  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:28.318904  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.323349  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.327034  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:28.327096  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:28.357958  306747 cri.go:89] found id: ""
	I1017 19:28:28.357983  306747 logs.go:282] 0 containers: []
	W1017 19:28:28.357992  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:28.358001  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:28.358059  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:28.384163  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:28.384187  306747 cri.go:89] found id: ""
	I1017 19:28:28.384196  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:28.384262  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:28.387976  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:28.388088  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:28.414600  306747 cri.go:89] found id: ""
	I1017 19:28:28.414625  306747 logs.go:282] 0 containers: []
	W1017 19:28:28.414635  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:28.414644  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:28.414655  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:28.478712  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:28.469484    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.470334    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.472333    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.473060    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.474868    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:28.469484    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.470334    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.472333    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.473060    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:28.474868    6198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:28.478736  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:28.478749  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:28.504392  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:28.504432  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:28.566111  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:28.566147  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:28.597513  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:28.597544  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:28.676314  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:28.676352  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:28.779140  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:28.779181  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:28.830823  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:28.830858  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:28.873192  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:28.873224  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:28.907594  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:28.907621  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:28.939159  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:28.939188  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:31.457173  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:31.468390  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:31.468462  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:31.500159  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:31.500183  306747 cri.go:89] found id: ""
	I1017 19:28:31.500191  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:31.500245  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.503981  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:31.504051  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:31.529707  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:31.529735  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:31.529740  306747 cri.go:89] found id: ""
	I1017 19:28:31.529748  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:31.529810  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.533478  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.536973  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:31.537042  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:31.562894  306747 cri.go:89] found id: ""
	I1017 19:28:31.562920  306747 logs.go:282] 0 containers: []
	W1017 19:28:31.562929  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:31.562936  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:31.562996  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:31.591920  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:31.591943  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:31.591949  306747 cri.go:89] found id: ""
	I1017 19:28:31.591956  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:31.592011  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.595596  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.598999  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:31.599093  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:31.631142  306747 cri.go:89] found id: ""
	I1017 19:28:31.631164  306747 logs.go:282] 0 containers: []
	W1017 19:28:31.631173  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:31.631179  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:31.631264  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:31.657995  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:31.658017  306747 cri.go:89] found id: ""
	I1017 19:28:31.658026  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:31.658077  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:31.661797  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:31.661866  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:31.687995  306747 cri.go:89] found id: ""
	I1017 19:28:31.688019  306747 logs.go:282] 0 containers: []
	W1017 19:28:31.688028  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:31.688037  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:31.688049  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:31.714258  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:31.714288  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:31.743480  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:31.743510  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:31.839126  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:31.839165  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:31.865944  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:31.865971  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:31.923800  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:31.923834  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:32.015198  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:32.015258  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:32.108618  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:32.108656  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:32.127026  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:32.127056  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:32.197465  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:32.189288    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.190038    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.191643    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.191956    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.193464    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:32.189288    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.190038    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.191643    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.191956    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:32.193464    6386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:32.197487  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:32.197501  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:32.230297  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:32.230333  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:34.763313  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:34.773938  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:34.774008  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:34.801473  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:34.801491  306747 cri.go:89] found id: ""
	I1017 19:28:34.801498  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:34.801568  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.805380  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:34.805451  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:34.831939  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:34.831964  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:34.831968  306747 cri.go:89] found id: ""
	I1017 19:28:34.831976  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:34.832034  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.836223  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.839881  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:34.839985  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:34.867700  306747 cri.go:89] found id: ""
	I1017 19:28:34.867725  306747 logs.go:282] 0 containers: []
	W1017 19:28:34.867735  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:34.867741  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:34.867826  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:34.898720  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:34.898743  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:34.898748  306747 cri.go:89] found id: ""
	I1017 19:28:34.898756  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:34.898827  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.902459  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.905896  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:34.905974  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:34.933166  306747 cri.go:89] found id: ""
	I1017 19:28:34.933242  306747 logs.go:282] 0 containers: []
	W1017 19:28:34.933258  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:34.933266  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:34.933326  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:34.961978  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:34.962067  306747 cri.go:89] found id: ""
	I1017 19:28:34.962091  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:34.962173  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:34.966069  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:34.966147  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:34.993526  306747 cri.go:89] found id: ""
	I1017 19:28:34.993565  306747 logs.go:282] 0 containers: []
	W1017 19:28:34.993574  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:34.993583  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:34.993594  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:35.023086  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:35.023173  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:35.057614  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:35.057652  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:35.126909  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:35.126944  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:35.207646  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:35.207681  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:35.240791  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:35.240824  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:35.259253  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:35.259285  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:35.327544  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:35.319793    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.320443    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.321977    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.322405    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.323890    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:35.319793    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.320443    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.321977    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.322405    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:35.323890    6514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:35.327566  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:35.327579  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:35.377112  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:35.377150  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:35.405892  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:35.405920  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:35.431201  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:35.431230  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:38.030766  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:38.042946  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:38.043015  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:38.074181  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:38.074215  306747 cri.go:89] found id: ""
	I1017 19:28:38.074224  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:38.074287  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.079011  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:38.079083  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:38.108493  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:38.108592  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:38.108612  306747 cri.go:89] found id: ""
	I1017 19:28:38.108636  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:38.108721  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.112489  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.115918  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:38.116030  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:38.146192  306747 cri.go:89] found id: ""
	I1017 19:28:38.146215  306747 logs.go:282] 0 containers: []
	W1017 19:28:38.146225  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:38.146233  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:38.146315  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:38.178299  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:38.178363  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:38.178375  306747 cri.go:89] found id: ""
	I1017 19:28:38.178382  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:38.178438  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.182144  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.185723  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:38.185785  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:38.210486  306747 cri.go:89] found id: ""
	I1017 19:28:38.210509  306747 logs.go:282] 0 containers: []
	W1017 19:28:38.210518  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:38.210524  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:38.210578  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:38.240550  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:38.240573  306747 cri.go:89] found id: ""
	I1017 19:28:38.240581  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:38.240633  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:38.246616  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:38.246710  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:38.272684  306747 cri.go:89] found id: ""
	I1017 19:28:38.272710  306747 logs.go:282] 0 containers: []
	W1017 19:28:38.272719  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:38.272728  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:38.272759  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:38.291309  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:38.291338  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:38.362093  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:38.354481    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.355177    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.356720    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.357017    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.358292    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:38.354481    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.355177    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.356720    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.357017    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:38.358292    6613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:38.362115  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:38.362136  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:38.388487  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:38.388541  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:38.460507  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:38.460545  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:38.493438  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:38.493472  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:38.519348  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:38.519378  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:38.547771  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:38.547800  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:38.646739  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:38.646779  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:38.711727  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:38.711765  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:38.794605  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:38.794645  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:41.329100  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:41.340102  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:41.340191  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:41.378237  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:41.378304  306747 cri.go:89] found id: ""
	I1017 19:28:41.378327  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:41.378411  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.382295  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:41.382433  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:41.413432  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:41.413454  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:41.413459  306747 cri.go:89] found id: ""
	I1017 19:28:41.413483  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:41.413541  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.417349  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.420940  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:41.421030  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:41.447730  306747 cri.go:89] found id: ""
	I1017 19:28:41.447754  306747 logs.go:282] 0 containers: []
	W1017 19:28:41.447763  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:41.447769  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:41.447917  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:41.473491  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:41.473514  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:41.473520  306747 cri.go:89] found id: ""
	I1017 19:28:41.473527  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:41.473602  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.477615  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.481139  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:41.481211  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:41.507258  306747 cri.go:89] found id: ""
	I1017 19:28:41.507283  306747 logs.go:282] 0 containers: []
	W1017 19:28:41.507292  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:41.507300  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:41.507356  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:41.537051  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:41.537073  306747 cri.go:89] found id: ""
	I1017 19:28:41.537082  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:41.537134  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:41.540852  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:41.540920  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:41.567361  306747 cri.go:89] found id: ""
	I1017 19:28:41.567389  306747 logs.go:282] 0 containers: []
	W1017 19:28:41.567398  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:41.567407  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:41.567419  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:41.599142  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:41.599172  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:41.635743  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:41.635773  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:41.654302  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:41.654331  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:41.717143  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:41.717179  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:41.792345  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:41.792380  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:41.871479  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:41.871517  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:41.975433  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:41.975512  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:42.054059  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:42.044191    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.045351    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.046050    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.047965    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.048651    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:42.044191    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.045351    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.046050    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.047965    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:42.048651    6790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:42.054083  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:42.054106  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:42.089914  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:42.089944  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:42.149148  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:42.149200  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:44.709425  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:44.719908  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:44.719977  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:44.763510  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:44.763534  306747 cri.go:89] found id: ""
	I1017 19:28:44.763541  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:44.763594  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.767241  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:44.767313  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:44.795651  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:44.795675  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:44.795681  306747 cri.go:89] found id: ""
	I1017 19:28:44.795689  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:44.795742  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.800272  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.804452  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:44.804565  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:44.839339  306747 cri.go:89] found id: ""
	I1017 19:28:44.839371  306747 logs.go:282] 0 containers: []
	W1017 19:28:44.839379  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:44.839386  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:44.839452  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:44.875066  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:44.875099  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:44.875105  306747 cri.go:89] found id: ""
	I1017 19:28:44.875139  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:44.875214  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.880309  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.883914  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:44.884020  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:44.917517  306747 cri.go:89] found id: ""
	I1017 19:28:44.917586  306747 logs.go:282] 0 containers: []
	W1017 19:28:44.917614  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:44.917638  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:44.917727  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:44.946317  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:44.946393  306747 cri.go:89] found id: ""
	I1017 19:28:44.946416  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:44.946496  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:44.950194  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:44.950311  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:44.976935  306747 cri.go:89] found id: ""
	I1017 19:28:44.977000  306747 logs.go:282] 0 containers: []
	W1017 19:28:44.977027  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:44.977054  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:44.977071  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:45.083362  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:45.083465  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:45.185240  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:45.174155    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.175051    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.176949    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.178114    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.178917    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:45.174155    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.175051    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.176949    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.178114    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:45.178917    6887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:45.185281  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:45.185298  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:45.229219  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:45.229247  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:45.303101  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:45.303141  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:45.395057  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:45.395208  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:45.422882  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:45.422938  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:45.465002  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:45.465035  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:45.501568  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:45.501600  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:45.530952  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:45.530983  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:45.610519  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:45.610560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:48.146542  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:48.158014  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:48.158095  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:48.185610  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:48.185676  306747 cri.go:89] found id: ""
	I1017 19:28:48.185699  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:48.185773  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.189874  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:48.189975  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:48.216931  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:48.216997  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:48.217020  306747 cri.go:89] found id: ""
	I1017 19:28:48.217044  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:48.217112  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.220961  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.224622  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:48.224715  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:48.254633  306747 cri.go:89] found id: ""
	I1017 19:28:48.254660  306747 logs.go:282] 0 containers: []
	W1017 19:28:48.254669  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:48.254676  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:48.254759  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:48.280918  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:48.280996  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:48.281017  306747 cri.go:89] found id: ""
	I1017 19:28:48.281033  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:48.281101  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.285444  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.289246  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:48.289369  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:48.317150  306747 cri.go:89] found id: ""
	I1017 19:28:48.317216  306747 logs.go:282] 0 containers: []
	W1017 19:28:48.317244  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:48.317275  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:48.317350  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:48.347609  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:48.347643  306747 cri.go:89] found id: ""
	I1017 19:28:48.347652  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:48.347704  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:48.351509  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:48.351584  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:48.376680  306747 cri.go:89] found id: ""
	I1017 19:28:48.376708  306747 logs.go:282] 0 containers: []
	W1017 19:28:48.376716  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:48.376726  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:48.376738  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:48.452752  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:48.452788  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:48.484352  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:48.484382  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:48.510315  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:48.510344  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:48.571544  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:48.571578  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:48.609922  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:48.609951  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:48.642129  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:48.642158  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:48.737103  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:48.737139  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:48.755251  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:48.755324  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:48.826596  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:48.817740    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.818885    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.819683    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.820717    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.821339    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:48.817740    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.818885    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.819683    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.820717    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:48.821339    7075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:48.826621  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:48.826676  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:48.917412  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:48.917447  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:51.447884  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:51.458905  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:51.458975  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:51.486341  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:51.486364  306747 cri.go:89] found id: ""
	I1017 19:28:51.486373  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:51.486435  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.490132  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:51.490214  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:51.515926  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:51.515950  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:51.515956  306747 cri.go:89] found id: ""
	I1017 19:28:51.515964  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:51.516033  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.520421  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.524078  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:51.524150  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:51.558659  306747 cri.go:89] found id: ""
	I1017 19:28:51.558683  306747 logs.go:282] 0 containers: []
	W1017 19:28:51.558693  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:51.558700  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:51.558754  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:51.584326  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:51.584349  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:51.584355  306747 cri.go:89] found id: ""
	I1017 19:28:51.584362  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:51.584417  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.588059  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.591616  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:51.591692  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:51.621537  306747 cri.go:89] found id: ""
	I1017 19:28:51.621562  306747 logs.go:282] 0 containers: []
	W1017 19:28:51.621571  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:51.621577  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:51.621634  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:51.648966  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:51.648994  306747 cri.go:89] found id: ""
	I1017 19:28:51.649002  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:51.649064  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:51.652867  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:51.652934  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:51.685921  306747 cri.go:89] found id: ""
	I1017 19:28:51.685944  306747 logs.go:282] 0 containers: []
	W1017 19:28:51.685953  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:51.685962  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:51.685973  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:51.759988  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:51.760023  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:51.846069  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:51.835717    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.836264    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.837776    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.840665    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.841647    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:51.835717    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.836264    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.837776    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.840665    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:51.841647    7164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:51.846090  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:51.846105  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:51.875253  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:51.875281  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:51.929449  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:51.929478  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:52.036309  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:52.036348  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:52.054743  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:52.054772  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:52.088833  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:52.088860  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:52.157298  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:52.157332  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:52.199361  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:52.199392  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:52.268239  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:52.268286  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:54.799369  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:54.809961  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:54.810031  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:54.836137  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:54.836157  306747 cri.go:89] found id: ""
	I1017 19:28:54.836167  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:54.836220  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.839841  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:54.839912  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:54.873358  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:54.873379  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:54.873383  306747 cri.go:89] found id: ""
	I1017 19:28:54.873391  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:54.873445  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.877284  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.881090  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:54.881164  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:54.908431  306747 cri.go:89] found id: ""
	I1017 19:28:54.908456  306747 logs.go:282] 0 containers: []
	W1017 19:28:54.908465  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:54.908471  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:54.908607  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:54.935825  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:54.935845  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:54.935850  306747 cri.go:89] found id: ""
	I1017 19:28:54.935857  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:54.935913  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.939621  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:54.943502  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:54.943577  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:54.973718  306747 cri.go:89] found id: ""
	I1017 19:28:54.973742  306747 logs.go:282] 0 containers: []
	W1017 19:28:54.973751  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:54.973757  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:54.973818  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:55.004781  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:55.004802  306747 cri.go:89] found id: ""
	I1017 19:28:55.004818  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:55.004885  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:55.015050  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:55.015136  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:55.043899  306747 cri.go:89] found id: ""
	I1017 19:28:55.043966  306747 logs.go:282] 0 containers: []
	W1017 19:28:55.043988  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:55.044013  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:55.044056  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:55.097224  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:55.097263  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:55.126143  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:55.126175  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:55.170272  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:55.170302  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:55.190816  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:55.190846  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:55.229778  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:55.229815  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:55.296882  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:55.296954  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:55.322920  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:55.322960  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:28:55.398513  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:55.398549  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:55.499678  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:55.499714  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:55.563984  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:55.555178    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.556013    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.557806    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.558580    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.560270    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:55.555178    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.556013    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.557806    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.558580    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:55.560270    7357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:55.564010  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:55.564024  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:58.090313  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:28:58.101520  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:28:58.101590  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:28:58.135133  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:58.135155  306747 cri.go:89] found id: ""
	I1017 19:28:58.135165  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:28:58.135217  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.139309  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:28:58.139381  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:28:58.166722  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:58.166743  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:58.166749  306747 cri.go:89] found id: ""
	I1017 19:28:58.166757  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:28:58.166829  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.170644  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.174541  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:28:58.174614  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:28:58.200707  306747 cri.go:89] found id: ""
	I1017 19:28:58.200733  306747 logs.go:282] 0 containers: []
	W1017 19:28:58.200741  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:28:58.200748  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:28:58.200802  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:28:58.227069  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:58.227090  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:58.227095  306747 cri.go:89] found id: ""
	I1017 19:28:58.227102  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:28:58.227153  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.230793  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.234187  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:28:58.234268  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:28:58.260228  306747 cri.go:89] found id: ""
	I1017 19:28:58.260255  306747 logs.go:282] 0 containers: []
	W1017 19:28:58.260264  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:28:58.260271  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:28:58.260330  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:28:58.287560  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:58.287582  306747 cri.go:89] found id: ""
	I1017 19:28:58.287590  306747 logs.go:282] 1 containers: [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:28:58.287642  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:28:58.291431  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:28:58.291498  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:28:58.319091  306747 cri.go:89] found id: ""
	I1017 19:28:58.319116  306747 logs.go:282] 0 containers: []
	W1017 19:28:58.319125  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:28:58.319133  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:28:58.319144  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:28:58.357128  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:28:58.357156  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:28:58.457940  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:28:58.457987  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:28:58.477285  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:28:58.477363  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:28:58.553846  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:28:58.545334    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.546110    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.547791    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.548153    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.549602    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:28:58.545334    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.546110    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.547791    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.548153    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:28:58.549602    7453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:28:58.553942  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:28:58.553987  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:28:58.588733  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:28:58.588806  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:28:58.615167  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:28:58.615234  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:28:58.668448  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:28:58.668480  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:28:58.701507  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:28:58.701539  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:28:58.772475  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:28:58.772512  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:28:58.800891  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:28:58.800921  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:01.380664  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:01.397862  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:01.397929  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:01.438317  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:01.438341  306747 cri.go:89] found id: ""
	I1017 19:29:01.438349  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:01.438408  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.448585  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:01.448665  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:01.480947  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:01.480971  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:01.480978  306747 cri.go:89] found id: ""
	I1017 19:29:01.480985  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:01.481040  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.488101  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.493426  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:01.493541  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:01.529725  306747 cri.go:89] found id: ""
	I1017 19:29:01.529759  306747 logs.go:282] 0 containers: []
	W1017 19:29:01.529767  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:01.529803  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:01.529888  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:01.570078  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:01.570130  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:01.570162  306747 cri.go:89] found id: ""
	I1017 19:29:01.570347  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:01.570572  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.580262  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.584761  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:01.584865  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:01.619278  306747 cri.go:89] found id: ""
	I1017 19:29:01.619316  306747 logs.go:282] 0 containers: []
	W1017 19:29:01.619326  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:01.619460  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:01.619709  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:01.668374  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:01.668398  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:01.668404  306747 cri.go:89] found id: ""
	I1017 19:29:01.668411  306747 logs.go:282] 2 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:29:01.668500  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.672629  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:01.676472  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:01.676559  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:01.718877  306747 cri.go:89] found id: ""
	I1017 19:29:01.718901  306747 logs.go:282] 0 containers: []
	W1017 19:29:01.718911  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:01.718979  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:01.719003  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:01.786370  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:01.786448  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:01.835925  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:01.836009  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:01.936969  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:01.937000  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:01.985828  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:01.985857  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:02.036057  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:02.036090  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:02.088571  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:02.088600  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:02.183054  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:02.174539    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.175524    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.177270    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.177576    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.179060    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:02.174539    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.175524    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.177270    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.177576    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:02.179060    7629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:02.183078  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:02.183094  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:02.214988  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:29:02.215019  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:02.246207  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:02.246238  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:02.338642  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:02.338682  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:02.473356  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:02.473435  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:04.994292  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:05.005817  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:05.005900  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:05.038175  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:05.038208  306747 cri.go:89] found id: ""
	I1017 19:29:05.038217  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:05.038276  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.042122  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:05.042193  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:05.072245  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:05.072271  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:05.072277  306747 cri.go:89] found id: ""
	I1017 19:29:05.072290  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:05.072369  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.085415  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.089790  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:05.089901  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:05.126026  306747 cri.go:89] found id: ""
	I1017 19:29:05.126051  306747 logs.go:282] 0 containers: []
	W1017 19:29:05.126059  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:05.126065  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:05.126129  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:05.157653  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:05.157689  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:05.157694  306747 cri.go:89] found id: ""
	I1017 19:29:05.157708  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:05.157780  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.162134  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.166047  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:05.166134  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:05.201222  306747 cri.go:89] found id: ""
	I1017 19:29:05.201247  306747 logs.go:282] 0 containers: []
	W1017 19:29:05.201266  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:05.201291  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:05.201364  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:05.228323  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:05.228343  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:05.228348  306747 cri.go:89] found id: ""
	I1017 19:29:05.228355  306747 logs.go:282] 2 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:29:05.228413  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.232758  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:05.236321  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:05.236407  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:05.264094  306747 cri.go:89] found id: ""
	I1017 19:29:05.264119  306747 logs.go:282] 0 containers: []
	W1017 19:29:05.264128  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:05.264137  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:05.264150  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:05.289719  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:05.289749  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:05.341596  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:05.341632  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:05.385650  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:05.385681  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:05.455993  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:05.456032  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:05.482902  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:05.482967  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:05.561357  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:05.561393  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:05.662914  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:05.662948  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:05.681986  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:05.682019  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:05.709932  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:29:05.709959  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:05.745521  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:05.745548  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:05.780007  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:05.780039  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:05.861169  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:05.844357    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.845194    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.846708    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.847144    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.849138    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:05.844357    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.845194    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.846708    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.847144    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:05.849138    7802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:08.361828  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:08.372509  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:08.372609  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:08.398614  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:08.398638  306747 cri.go:89] found id: ""
	I1017 19:29:08.398646  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:08.398707  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.402221  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:08.402294  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:08.426256  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:08.426278  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:08.426284  306747 cri.go:89] found id: ""
	I1017 19:29:08.426291  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:08.426341  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.429916  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.433518  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:08.433587  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:08.460461  306747 cri.go:89] found id: ""
	I1017 19:29:08.460487  306747 logs.go:282] 0 containers: []
	W1017 19:29:08.460495  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:08.460502  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:08.460591  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:08.488509  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:08.488562  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:08.488568  306747 cri.go:89] found id: ""
	I1017 19:29:08.488576  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:08.488628  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.492158  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.495581  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:08.495647  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:08.524899  306747 cri.go:89] found id: ""
	I1017 19:29:08.524920  306747 logs.go:282] 0 containers: []
	W1017 19:29:08.524928  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:08.524934  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:08.524997  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:08.552958  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:08.552979  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:08.552984  306747 cri.go:89] found id: ""
	I1017 19:29:08.552991  306747 logs.go:282] 2 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:29:08.553045  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.557091  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:08.560618  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:08.560683  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:08.587418  306747 cri.go:89] found id: ""
	I1017 19:29:08.587495  306747 logs.go:282] 0 containers: []
	W1017 19:29:08.587517  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:08.587557  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:29:08.587586  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:08.617740  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:08.617768  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:08.691709  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:08.691747  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:08.710175  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:08.710209  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:08.777270  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:08.777305  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:08.810729  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:08.810754  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:08.861497  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:08.861524  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:08.964232  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:08.964270  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:09.042894  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:09.034262    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.034773    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.036444    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.037159    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.038877    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:09.034262    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.034773    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.036444    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.037159    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:09.038877    7913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:09.042916  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:09.042941  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:09.067822  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:09.067849  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:09.107723  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:09.107755  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:09.186115  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:09.186151  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:11.716134  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:11.726531  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:11.726597  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:11.752711  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:11.752733  306747 cri.go:89] found id: ""
	I1017 19:29:11.752741  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:11.752795  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.756278  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:11.756366  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:11.786396  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:11.786424  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:11.786430  306747 cri.go:89] found id: ""
	I1017 19:29:11.786439  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:11.786523  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.790327  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.794284  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:11.794350  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:11.826413  306747 cri.go:89] found id: ""
	I1017 19:29:11.826437  306747 logs.go:282] 0 containers: []
	W1017 19:29:11.826446  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:11.826452  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:11.826507  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:11.861782  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:11.861855  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:11.861875  306747 cri.go:89] found id: ""
	I1017 19:29:11.861900  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:11.861986  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.866376  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.870040  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:11.870106  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:11.902703  306747 cri.go:89] found id: ""
	I1017 19:29:11.902725  306747 logs.go:282] 0 containers: []
	W1017 19:29:11.902739  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:11.902745  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:11.902803  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:11.932072  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:11.932141  306747 cri.go:89] found id: "01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:11.932161  306747 cri.go:89] found id: ""
	I1017 19:29:11.932186  306747 logs.go:282] 2 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c]
	I1017 19:29:11.932273  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.935981  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:11.939489  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:11.939560  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:11.975511  306747 cri.go:89] found id: ""
	I1017 19:29:11.975535  306747 logs.go:282] 0 containers: []
	W1017 19:29:11.975544  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:11.975553  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:11.975565  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:12.003072  306747 logs.go:123] Gathering logs for kube-controller-manager [01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c] ...
	I1017 19:29:12.003107  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 01dd3ffc34f3fa7e4aa162a5a58e93d7d5f69d2ccafd35789a1a5c6bbac9637c"
	I1017 19:29:12.038364  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:12.038400  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:12.116412  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:12.116450  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:12.147738  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:12.147766  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:12.245018  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:12.245053  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:12.262566  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:12.262641  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:12.312750  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:12.312785  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:12.349963  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:12.349991  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:12.419426  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:12.411356    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.411861    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.413495    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.414181    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.415507    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:12.411356    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.411861    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.413495    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.414181    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:12.415507    8065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:12.419456  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:12.419472  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:12.444065  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:12.444093  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:12.511165  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:12.511200  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
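	The cycle above repeats roughly every three seconds: minikube re-runs pgrep for kube-apiserver, lists each control-plane component with crictl, and re-collects the kubelet, dmesg, CRI-O and per-container logs, while "kubectl describe nodes" keeps failing because nothing answers on localhost:8443. A minimal sketch of the equivalent manual checks follows, assuming SSH access to the minikube node; the container ID and port are taken from the log above, and the commands are illustrative only, not part of the test run.
	
		# list the kube-apiserver container the log-gatherer keeps finding
		sudo crictl ps -a --name=kube-apiserver
		# inspect its state and exit code (ID copied from the log above)
		sudo crictl inspect 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b | grep -E '"state"|"exitCode"'
		# check whether anything is listening on the API server port
		sudo ss -ltnp | grep 8443 || echo "no listener on 8443"
		# probe the health endpoint kubectl is failing to reach
		curl -ks https://localhost:8443/healthz || echo "apiserver not serving"
	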
	I1017 19:29:15.042908  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:15.054321  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:15.054394  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:15.089860  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:15.089886  306747 cri.go:89] found id: ""
	I1017 19:29:15.089895  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:15.089951  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.093678  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:15.093788  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:15.121746  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:15.121771  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:15.121776  306747 cri.go:89] found id: ""
	I1017 19:29:15.121784  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:15.121839  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.125790  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.129470  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:15.129544  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:15.156564  306747 cri.go:89] found id: ""
	I1017 19:29:15.156591  306747 logs.go:282] 0 containers: []
	W1017 19:29:15.156600  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:15.156606  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:15.156665  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:15.189983  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:15.190010  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:15.190015  306747 cri.go:89] found id: ""
	I1017 19:29:15.190023  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:15.190113  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.194081  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.197983  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:15.198087  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:15.224673  306747 cri.go:89] found id: ""
	I1017 19:29:15.224701  306747 logs.go:282] 0 containers: []
	W1017 19:29:15.224710  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:15.224716  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:15.224776  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:15.250249  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:15.250272  306747 cri.go:89] found id: ""
	I1017 19:29:15.250280  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:15.250336  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:15.254014  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:15.254080  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:15.281235  306747 cri.go:89] found id: ""
	I1017 19:29:15.281313  306747 logs.go:282] 0 containers: []
	W1017 19:29:15.281337  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:15.281363  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:15.281395  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:15.385553  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:15.385599  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:15.411962  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:15.411991  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:15.455045  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:15.455073  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:15.527131  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:15.527170  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:15.554497  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:15.554527  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:15.587137  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:15.587164  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:15.604763  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:15.604794  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:15.679834  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:15.670121    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.670686    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.672157    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.672558    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.674247    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:15.670121    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.670686    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.672157    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.672558    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:15.674247    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:15.679857  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:15.679870  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:15.734902  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:15.734947  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:15.764734  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:15.764760  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:18.342635  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:18.353361  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:18.353435  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:18.380287  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:18.380311  306747 cri.go:89] found id: ""
	I1017 19:29:18.380319  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:18.380371  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.384298  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:18.384372  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:18.410566  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:18.410585  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:18.410590  306747 cri.go:89] found id: ""
	I1017 19:29:18.410597  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:18.410651  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.414392  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.417897  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:18.417969  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:18.447960  306747 cri.go:89] found id: ""
	I1017 19:29:18.447984  306747 logs.go:282] 0 containers: []
	W1017 19:29:18.447992  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:18.447999  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:18.448054  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:18.474020  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:18.474043  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:18.474049  306747 cri.go:89] found id: ""
	I1017 19:29:18.474059  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:18.474117  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.477723  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.481031  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:18.481111  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:18.508003  306747 cri.go:89] found id: ""
	I1017 19:29:18.508026  306747 logs.go:282] 0 containers: []
	W1017 19:29:18.508034  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:18.508040  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:18.508123  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:18.535988  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:18.536017  306747 cri.go:89] found id: ""
	I1017 19:29:18.536026  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:18.536114  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:18.539822  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:18.539919  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:18.565247  306747 cri.go:89] found id: ""
	I1017 19:29:18.565271  306747 logs.go:282] 0 containers: []
	W1017 19:29:18.565279  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:18.565287  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:18.565340  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:18.590409  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:18.590435  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:18.664546  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:18.664583  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:18.720073  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:18.720102  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:18.818026  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:18.818065  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:18.838304  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:18.838335  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:18.923376  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:18.914478    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.915271    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.916962    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.917666    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.919294    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:18.914478    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.915271    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.916962    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.917666    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:18.919294    8328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:18.923400  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:18.923413  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:18.958683  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:18.958723  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:18.993098  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:18.993125  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:19.020011  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:19.020054  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:19.072525  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:19.072558  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:21.648626  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:21.658854  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:21.658923  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:21.686357  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:21.686380  306747 cri.go:89] found id: ""
	I1017 19:29:21.686388  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:21.686440  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.690383  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:21.690455  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:21.716829  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:21.716849  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:21.716854  306747 cri.go:89] found id: ""
	I1017 19:29:21.716861  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:21.716918  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.720495  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.723948  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:21.724016  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:21.751438  306747 cri.go:89] found id: ""
	I1017 19:29:21.751462  306747 logs.go:282] 0 containers: []
	W1017 19:29:21.751471  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:21.751478  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:21.751540  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:21.777499  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:21.777526  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:21.777531  306747 cri.go:89] found id: ""
	I1017 19:29:21.777539  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:21.777597  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.781539  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.785454  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:21.785568  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:21.816183  306747 cri.go:89] found id: ""
	I1017 19:29:21.816248  306747 logs.go:282] 0 containers: []
	W1017 19:29:21.816270  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:21.816292  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:21.816377  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:21.854603  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:21.854670  306747 cri.go:89] found id: ""
	I1017 19:29:21.854695  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:21.854779  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:21.860948  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:21.861028  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:21.899847  306747 cri.go:89] found id: ""
	I1017 19:29:21.899871  306747 logs.go:282] 0 containers: []
	W1017 19:29:21.899879  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:21.899887  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:21.899899  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:21.958460  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:21.958497  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:22.040921  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:22.040958  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:22.070331  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:22.070410  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:22.149286  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:22.149326  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:22.180733  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:22.180761  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:22.199492  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:22.199531  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:22.272753  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:22.265010    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.265612    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.267150    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.267571    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.269051    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:22.265010    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.265612    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.267150    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.267571    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:22.269051    8480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:22.272779  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:22.272792  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:22.299733  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:22.299761  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:22.342105  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:22.342137  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:22.369741  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:22.369780  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:24.966101  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:24.976635  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:24.976715  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:25.022230  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:25.022256  306747 cri.go:89] found id: ""
	I1017 19:29:25.022267  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:25.022330  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.026476  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:25.026548  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:25.056264  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:25.056282  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:25.056287  306747 cri.go:89] found id: ""
	I1017 19:29:25.056295  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:25.056345  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.061372  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.064965  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:25.065034  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:25.104703  306747 cri.go:89] found id: ""
	I1017 19:29:25.104725  306747 logs.go:282] 0 containers: []
	W1017 19:29:25.104734  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:25.104739  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:25.104799  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:25.137104  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:25.137128  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:25.137134  306747 cri.go:89] found id: ""
	I1017 19:29:25.137142  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:25.137197  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.141057  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.144695  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:25.144771  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:25.171838  306747 cri.go:89] found id: ""
	I1017 19:29:25.171861  306747 logs.go:282] 0 containers: []
	W1017 19:29:25.171870  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:25.171876  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:25.171935  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:25.204227  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:25.204251  306747 cri.go:89] found id: ""
	I1017 19:29:25.204259  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:25.204312  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:25.208502  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:25.208632  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:25.234929  306747 cri.go:89] found id: ""
	I1017 19:29:25.235003  306747 logs.go:282] 0 containers: []
	W1017 19:29:25.235020  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:25.235030  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:25.235043  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:25.272163  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:25.272192  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:25.370863  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:25.370900  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:25.411966  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:25.412009  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:25.479240  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:25.479276  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:25.506577  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:25.506606  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:25.580671  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:25.580706  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:25.614033  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:25.614061  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:25.631893  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:25.631922  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:25.703391  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:25.694870    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.695646    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.697219    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.697740    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.699431    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:25.694870    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.695646    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.697219    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.697740    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:25.699431    8625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:25.703420  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:25.703449  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:25.729186  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:25.729213  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:28.281561  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:28.292670  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:28.292764  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:28.321689  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:28.321709  306747 cri.go:89] found id: ""
	I1017 19:29:28.321718  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:28.321791  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.325401  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:28.325491  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:28.353611  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:28.353636  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:28.353642  306747 cri.go:89] found id: ""
	I1017 19:29:28.353649  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:28.353708  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.357789  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.361132  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:28.361209  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:28.388364  306747 cri.go:89] found id: ""
	I1017 19:29:28.388392  306747 logs.go:282] 0 containers: []
	W1017 19:29:28.388401  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:28.388408  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:28.388471  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:28.414080  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:28.414105  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:28.414111  306747 cri.go:89] found id: ""
	I1017 19:29:28.414119  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:28.414176  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.417894  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.421494  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:28.421617  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:28.448583  306747 cri.go:89] found id: ""
	I1017 19:29:28.448611  306747 logs.go:282] 0 containers: []
	W1017 19:29:28.448620  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:28.448626  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:28.448683  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:28.481175  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:28.481198  306747 cri.go:89] found id: ""
	I1017 19:29:28.481208  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:28.481262  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:28.485099  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:28.485212  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:28.511543  306747 cri.go:89] found id: ""
	I1017 19:29:28.511569  306747 logs.go:282] 0 containers: []
	W1017 19:29:28.511577  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:28.511586  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:28.511617  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:28.606473  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:28.606511  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:28.626545  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:28.626577  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:28.697168  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:28.689422    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.690138    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.691704    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.692016    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.693514    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:28.689422    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.690138    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.691704    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.692016    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:28.693514    8717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:28.697191  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:28.697204  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:28.750046  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:28.750080  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:28.818139  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:28.818172  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:28.847832  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:28.847916  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:28.928453  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:28.928489  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:28.959160  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:28.959188  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:28.986346  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:28.986374  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:29.037329  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:29.037364  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
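	By this point the pattern is unchanged: coredns, kube-proxy and kindnet still report 0 containers, presumably because those workloads are only created once the API server is reachable, while etcd and kube-scheduler each list two container IDs (an earlier instance plus the one started on restart). A sketch of how the kubelet journal already being collected could be narrowed down to the kube-apiserver static pod (the journalctl call comes from the log; the grep patterns are illustrative):
	
		# surface only kubelet messages about the kube-apiserver static pod
		sudo journalctl -u kubelet -n 400 | grep -iE 'kube-apiserver|static pod|connection refused'
		# state and restart attempt of the apiserver container found above
		sudo crictl ps -a --name=kube-apiserver -o json | grep -E '"state"|"attempt"'
	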
	I1017 19:29:31.569631  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:31.580386  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:31.580488  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:31.606748  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:31.606776  306747 cri.go:89] found id: ""
	I1017 19:29:31.606786  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:31.606861  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.610709  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:31.610808  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:31.637721  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:31.637742  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:31.637747  306747 cri.go:89] found id: ""
	I1017 19:29:31.637754  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:31.637831  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.641550  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.644918  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:31.644994  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:31.671222  306747 cri.go:89] found id: ""
	I1017 19:29:31.671248  306747 logs.go:282] 0 containers: []
	W1017 19:29:31.671257  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:31.671263  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:31.671320  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:31.698318  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:31.698341  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:31.698347  306747 cri.go:89] found id: ""
	I1017 19:29:31.698354  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:31.698409  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.702033  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.705305  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:31.705406  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:31.733910  306747 cri.go:89] found id: ""
	I1017 19:29:31.733940  306747 logs.go:282] 0 containers: []
	W1017 19:29:31.733949  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:31.733956  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:31.734012  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:31.759712  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:31.759743  306747 cri.go:89] found id: ""
	I1017 19:29:31.759752  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:31.759802  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:31.763496  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:31.763571  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:31.789631  306747 cri.go:89] found id: ""
	I1017 19:29:31.789656  306747 logs.go:282] 0 containers: []
	W1017 19:29:31.789665  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:31.789684  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:31.789701  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:31.907913  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:31.907961  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:31.927231  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:31.927316  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:32.018355  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:32.018394  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:32.062156  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:32.062194  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:32.153927  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:32.153962  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:32.187982  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:32.188010  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:32.258773  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:32.251239    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.251763    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.253326    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.253710    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.255187    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:32.251239    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.251763    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.253326    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.253710    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:32.255187    8888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:32.258796  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:32.258835  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:32.290660  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:32.290689  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:32.368997  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:32.369029  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:32.400957  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:32.400988  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:34.933742  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:34.945067  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:34.945160  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:34.975919  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:34.975944  306747 cri.go:89] found id: ""
	I1017 19:29:34.975952  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:34.976011  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:34.979876  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:34.979963  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:35.007426  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:35.007451  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:35.007456  306747 cri.go:89] found id: ""
	I1017 19:29:35.007464  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:35.007526  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.013588  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.018178  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:35.018277  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:35.048204  306747 cri.go:89] found id: ""
	I1017 19:29:35.048239  306747 logs.go:282] 0 containers: []
	W1017 19:29:35.048248  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:35.048255  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:35.048315  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:35.083329  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:35.083352  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:35.083358  306747 cri.go:89] found id: ""
	I1017 19:29:35.083366  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:35.083430  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.088406  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.094362  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:35.094435  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:35.125078  306747 cri.go:89] found id: ""
	I1017 19:29:35.125160  306747 logs.go:282] 0 containers: []
	W1017 19:29:35.125185  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:35.125198  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:35.125277  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:35.153519  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:35.153543  306747 cri.go:89] found id: ""
	I1017 19:29:35.153552  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:35.153605  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:35.157388  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:35.157485  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:35.189018  306747 cri.go:89] found id: ""
	I1017 19:29:35.189086  306747 logs.go:282] 0 containers: []
	W1017 19:29:35.189113  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:35.189142  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:35.189185  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:35.290719  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:35.290763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:35.310771  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:35.310803  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:35.386443  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:35.376912    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.377784    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.379400    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.379730    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.381228    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:35.376912    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.377784    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.379400    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.379730    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:35.381228    8997 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:35.386470  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:35.386484  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:35.442234  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:35.442274  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:35.480866  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:35.480896  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:35.549288  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:35.549326  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:35.576073  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:35.576102  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:35.611273  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:35.611308  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:35.639731  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:35.639763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:35.671118  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:35.671148  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:38.244668  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:38.257170  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:38.257244  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:38.283218  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:38.283238  306747 cri.go:89] found id: ""
	I1017 19:29:38.283247  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:38.283305  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.287299  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:38.287365  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:38.314528  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:38.314550  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:38.314555  306747 cri.go:89] found id: ""
	I1017 19:29:38.314563  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:38.314614  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.318298  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.321948  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:38.322042  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:38.349464  306747 cri.go:89] found id: ""
	I1017 19:29:38.349503  306747 logs.go:282] 0 containers: []
	W1017 19:29:38.349516  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:38.349538  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:38.349626  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:38.379503  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:38.379565  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:38.379583  306747 cri.go:89] found id: ""
	I1017 19:29:38.379608  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:38.379675  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.383360  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.387192  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:38.387298  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:38.421165  306747 cri.go:89] found id: ""
	I1017 19:29:38.421190  306747 logs.go:282] 0 containers: []
	W1017 19:29:38.421199  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:38.421205  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:38.421293  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:38.449443  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:38.449509  306747 cri.go:89] found id: ""
	I1017 19:29:38.449530  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:38.449608  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:38.453406  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:38.453530  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:38.480577  306747 cri.go:89] found id: ""
	I1017 19:29:38.480640  306747 logs.go:282] 0 containers: []
	W1017 19:29:38.480662  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:38.480687  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:38.480712  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:38.558339  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:38.558375  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:38.588992  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:38.589018  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:38.688443  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:38.688478  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:38.705940  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:38.706012  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:38.738810  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:38.738836  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:38.765665  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:38.765693  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:38.841021  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:38.831886    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.832670    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.834636    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.835450    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.837074    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:38.831886    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.832670    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.834636    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.835450    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:38.837074    9164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:38.841095  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:38.841115  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:38.870763  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:38.870791  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:38.943129  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:38.943162  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:38.984504  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:38.984583  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:41.577128  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:41.588152  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:41.588230  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:41.616214  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:41.616251  306747 cri.go:89] found id: ""
	I1017 19:29:41.616261  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:41.616333  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.620228  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:41.620301  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:41.647140  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:41.647166  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:41.647172  306747 cri.go:89] found id: ""
	I1017 19:29:41.647180  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:41.647241  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.650918  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.654626  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:41.654701  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:41.680974  306747 cri.go:89] found id: ""
	I1017 19:29:41.680999  306747 logs.go:282] 0 containers: []
	W1017 19:29:41.681008  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:41.681014  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:41.681071  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:41.707036  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:41.707071  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:41.707076  306747 cri.go:89] found id: ""
	I1017 19:29:41.707084  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:41.707137  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.710947  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.714920  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:41.715001  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:41.741927  306747 cri.go:89] found id: ""
	I1017 19:29:41.741952  306747 logs.go:282] 0 containers: []
	W1017 19:29:41.741962  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:41.741968  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:41.742026  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:41.766904  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:41.766928  306747 cri.go:89] found id: ""
	I1017 19:29:41.766936  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:41.766989  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:41.770640  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:41.770722  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:41.797979  306747 cri.go:89] found id: ""
	I1017 19:29:41.798007  306747 logs.go:282] 0 containers: []
	W1017 19:29:41.798017  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:41.798026  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:41.798038  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:41.815570  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:41.815602  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:41.872205  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:41.872246  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:41.910906  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:41.910942  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:41.996670  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:41.996709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:42.033766  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:42.033804  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:42.143006  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:42.143055  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:42.258670  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:42.246629    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.247190    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.249238    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.250318    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.251136    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:42.246629    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.247190    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.249238    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.250318    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:42.251136    9310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:42.258694  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:42.258709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:42.294390  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:42.294422  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:42.328168  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:42.328202  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:42.357875  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:42.357932  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:44.934951  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:44.945451  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:44.945522  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:44.979178  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:44.979201  306747 cri.go:89] found id: ""
	I1017 19:29:44.979209  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:44.979263  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:44.983046  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:44.983126  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:45.035414  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:45.035438  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:45.035443  306747 cri.go:89] found id: ""
	I1017 19:29:45.035451  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:45.035519  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.048433  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.053636  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:45.053716  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:45.120373  306747 cri.go:89] found id: ""
	I1017 19:29:45.120397  306747 logs.go:282] 0 containers: []
	W1017 19:29:45.120406  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:45.120414  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:45.120482  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:45.167585  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:45.167667  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:45.167692  306747 cri.go:89] found id: ""
	I1017 19:29:45.167719  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:45.167819  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.173369  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.178434  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:45.178531  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:45.220087  306747 cri.go:89] found id: ""
	I1017 19:29:45.220115  306747 logs.go:282] 0 containers: []
	W1017 19:29:45.220125  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:45.220132  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:45.220222  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:45.275433  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:45.275475  306747 cri.go:89] found id: ""
	I1017 19:29:45.275484  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:45.275559  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:45.281184  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:45.281323  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:45.323004  306747 cri.go:89] found id: ""
	I1017 19:29:45.323106  306747 logs.go:282] 0 containers: []
	W1017 19:29:45.323137  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:45.323188  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:45.323238  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:45.371491  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:45.371598  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:45.464170  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:45.455221    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.456745    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.457962    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.458630    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.460252    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:45.455221    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.456745    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.457962    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.458630    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:45.460252    9408 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:45.464194  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:45.464206  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:45.499416  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:45.499445  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:45.536994  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:45.537028  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:45.615136  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:45.615172  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:45.720244  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:45.720281  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:45.778577  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:45.778610  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:45.859732  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:45.859813  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:45.896812  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:45.896889  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:45.929734  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:45.929763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:48.461978  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:48.472688  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:48.472759  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:48.499995  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:48.500019  306747 cri.go:89] found id: ""
	I1017 19:29:48.500028  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:48.500084  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.504256  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:48.504330  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:48.533568  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:48.533627  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:48.533647  306747 cri.go:89] found id: ""
	I1017 19:29:48.533662  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:48.533722  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.538269  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.542307  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:48.542388  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:48.572286  306747 cri.go:89] found id: ""
	I1017 19:29:48.572355  306747 logs.go:282] 0 containers: []
	W1017 19:29:48.572379  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:48.572405  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:48.572499  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:48.599218  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:48.599246  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:48.599251  306747 cri.go:89] found id: ""
	I1017 19:29:48.599259  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:48.599310  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.603036  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.606361  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:48.606471  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:48.631930  306747 cri.go:89] found id: ""
	I1017 19:29:48.631966  306747 logs.go:282] 0 containers: []
	W1017 19:29:48.631975  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:48.631982  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:48.632052  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:48.658684  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:48.658711  306747 cri.go:89] found id: ""
	I1017 19:29:48.658720  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:48.658773  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:48.662512  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:48.662586  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:48.688997  306747 cri.go:89] found id: ""
	I1017 19:29:48.689022  306747 logs.go:282] 0 containers: []
	W1017 19:29:48.689031  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:48.689041  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:48.689052  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:48.789868  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:48.789919  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:48.860960  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:48.850451    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.851072    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.852664    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.852967    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.854822    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:48.850451    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.851072    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.852664    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.852967    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:48.854822    9545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:48.860984  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:48.861000  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:48.933293  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:48.933334  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:48.961662  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:48.961692  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:48.998503  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:48.998533  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:49.030219  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:49.030292  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:49.048915  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:49.048949  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:49.075217  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:49.075256  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:49.132824  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:49.132859  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:49.166233  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:49.166269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:51.747014  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:51.757581  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:51.757655  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:51.783413  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:51.783436  306747 cri.go:89] found id: ""
	I1017 19:29:51.783444  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:51.783499  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.787489  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:51.787553  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:51.815381  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:51.815404  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:51.815408  306747 cri.go:89] found id: ""
	I1017 19:29:51.815415  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:51.815467  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.819345  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.822754  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:51.822830  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:51.863882  306747 cri.go:89] found id: ""
	I1017 19:29:51.863922  306747 logs.go:282] 0 containers: []
	W1017 19:29:51.863931  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:51.863937  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:51.863997  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:51.896342  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:51.896414  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:51.896433  306747 cri.go:89] found id: ""
	I1017 19:29:51.896457  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:51.896574  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.900688  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.905025  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:51.905156  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:51.950302  306747 cri.go:89] found id: ""
	I1017 19:29:51.950325  306747 logs.go:282] 0 containers: []
	W1017 19:29:51.950333  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:51.950339  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:51.950408  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:51.984143  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:51.984164  306747 cri.go:89] found id: ""
	I1017 19:29:51.984172  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:51.984225  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:51.988312  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:51.988387  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:52.024692  306747 cri.go:89] found id: ""
	I1017 19:29:52.024720  306747 logs.go:282] 0 containers: []
	W1017 19:29:52.024729  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:52.024738  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:52.024750  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:52.043591  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:52.043708  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:52.083962  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:52.084045  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:52.156858  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:52.149368    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.149750    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.151218    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.151521    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.152949    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:52.149368    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.149750    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.151218    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.151521    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:52.152949    9698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:52.156879  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:52.156894  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:52.183367  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:52.183396  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:52.244364  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:52.244445  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:52.277850  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:52.277883  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:52.363433  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:52.363473  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:52.392573  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:52.392602  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:52.421470  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:52.421499  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:52.502975  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:52.503014  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:55.106386  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:55.118281  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:55.118357  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:55.147588  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:55.147612  306747 cri.go:89] found id: ""
	I1017 19:29:55.147625  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:55.147679  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.151460  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:55.151530  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:55.179417  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:55.179441  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:55.179447  306747 cri.go:89] found id: ""
	I1017 19:29:55.179455  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:55.179512  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.184062  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.187762  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:55.187876  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:55.214159  306747 cri.go:89] found id: ""
	I1017 19:29:55.214187  306747 logs.go:282] 0 containers: []
	W1017 19:29:55.214196  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:55.214203  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:55.214268  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:55.244963  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:55.244987  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:55.244992  306747 cri.go:89] found id: ""
	I1017 19:29:55.244999  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:55.245052  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.250157  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.256061  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:55.256151  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:55.287091  306747 cri.go:89] found id: ""
	I1017 19:29:55.287114  306747 logs.go:282] 0 containers: []
	W1017 19:29:55.287122  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:55.287128  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:55.287192  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:55.316175  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:55.316245  306747 cri.go:89] found id: ""
	I1017 19:29:55.316268  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:55.316359  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:55.321292  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:55.321374  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:55.348125  306747 cri.go:89] found id: ""
	I1017 19:29:55.348151  306747 logs.go:282] 0 containers: []
	W1017 19:29:55.348160  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:55.348169  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:55.348181  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:55.380783  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:55.380812  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:55.414351  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:55.414386  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:55.484774  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:55.475182    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.476192    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.478010    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.478543    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.480183    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:55.475182    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.476192    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.478010    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.478543    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:55.480183    9835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:55.484796  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:55.484809  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:55.556984  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:55.557018  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:55.625177  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:55.625251  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:55.655370  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:55.655398  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:55.680829  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:55.680860  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:55.763300  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:55.763331  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:55.803920  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:55.803954  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:55.900738  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:55.900773  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:58.422801  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:29:58.433443  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:29:58.433516  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:29:58.464116  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:58.464136  306747 cri.go:89] found id: ""
	I1017 19:29:58.464144  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:29:58.464212  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.468047  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:29:58.468169  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:29:58.494945  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:29:58.494979  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:58.494985  306747 cri.go:89] found id: ""
	I1017 19:29:58.494993  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:29:58.495058  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.498896  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.502320  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:29:58.502386  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:29:58.531527  306747 cri.go:89] found id: ""
	I1017 19:29:58.531550  306747 logs.go:282] 0 containers: []
	W1017 19:29:58.531558  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:29:58.531564  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:29:58.531623  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:29:58.558316  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:58.558337  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:58.558342  306747 cri.go:89] found id: ""
	I1017 19:29:58.558350  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:29:58.558403  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.562311  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.565856  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:29:58.565960  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:29:58.591130  306747 cri.go:89] found id: ""
	I1017 19:29:58.591156  306747 logs.go:282] 0 containers: []
	W1017 19:29:58.591164  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:29:58.591173  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:29:58.591229  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:29:58.618142  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:58.618221  306747 cri.go:89] found id: ""
	I1017 19:29:58.618237  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:29:58.618297  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:29:58.621817  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:29:58.621888  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:29:58.651258  306747 cri.go:89] found id: ""
	I1017 19:29:58.651284  306747 logs.go:282] 0 containers: []
	W1017 19:29:58.651293  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:29:58.651302  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:29:58.651315  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:29:58.720909  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:29:58.720942  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:29:58.748703  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:29:58.748729  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:29:58.776433  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:29:58.776463  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:29:58.851007  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:29:58.851041  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:29:58.884351  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:29:58.884382  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:29:58.957941  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:29:58.949361    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.950154    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.951742    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.952330    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.954025    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:29:58.949361    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.950154    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.951742    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.952330    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:29:58.954025    9993 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:29:58.957961  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:29:58.957974  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:29:58.987459  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:29:58.987531  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:29:59.026978  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:29:59.027008  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:29:59.128822  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:29:59.128858  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:29:59.146047  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:29:59.146079  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:01.705070  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:01.718647  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:01.718748  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:01.753347  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:01.753387  306747 cri.go:89] found id: ""
	I1017 19:30:01.753395  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:01.753457  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.757741  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:01.757850  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:01.786783  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:01.786861  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:01.786873  306747 cri.go:89] found id: ""
	I1017 19:30:01.786882  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:01.787029  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.791549  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.796677  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:01.796752  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:01.826434  306747 cri.go:89] found id: ""
	I1017 19:30:01.826462  306747 logs.go:282] 0 containers: []
	W1017 19:30:01.826472  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:01.826478  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:01.826543  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:01.863544  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:01.863569  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:01.863574  306747 cri.go:89] found id: ""
	I1017 19:30:01.863582  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:01.863639  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.867992  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.872125  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:01.872206  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:01.908249  306747 cri.go:89] found id: ""
	I1017 19:30:01.908276  306747 logs.go:282] 0 containers: []
	W1017 19:30:01.908285  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:01.908292  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:01.908354  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:01.936971  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:01.937001  306747 cri.go:89] found id: ""
	I1017 19:30:01.937010  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:01.937105  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:01.941357  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:01.941426  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:01.982542  306747 cri.go:89] found id: ""
	I1017 19:30:01.982569  306747 logs.go:282] 0 containers: []
	W1017 19:30:01.982578  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:01.982593  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:01.982606  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:02.018942  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:02.018970  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:02.099513  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:02.099556  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:02.137502  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:02.137532  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:02.185697  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:02.185738  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:02.288795  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:02.288835  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:02.336210  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:02.336248  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:02.422878  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:02.422917  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:02.453635  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:02.453662  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:02.540123  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:02.540164  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:02.558457  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:02.558491  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:02.629161  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:02.619096   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.619981   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.621652   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.622279   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.624619   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:02.619096   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.619981   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.621652   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.622279   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:02.624619   10164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:05.130448  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:05.144120  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:05.144214  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:05.175291  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:05.175324  306747 cri.go:89] found id: ""
	I1017 19:30:05.175334  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:05.175394  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.179428  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:05.179514  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:05.212486  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:05.212511  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:05.212541  306747 cri.go:89] found id: ""
	I1017 19:30:05.212550  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:05.212606  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.216463  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.220220  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:05.220295  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:05.249597  306747 cri.go:89] found id: ""
	I1017 19:30:05.249624  306747 logs.go:282] 0 containers: []
	W1017 19:30:05.249633  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:05.249640  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:05.249706  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:05.276856  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:05.276878  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:05.276883  306747 cri.go:89] found id: ""
	I1017 19:30:05.276890  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:05.276945  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.280586  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.284132  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:05.284196  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:05.312051  306747 cri.go:89] found id: ""
	I1017 19:30:05.312081  306747 logs.go:282] 0 containers: []
	W1017 19:30:05.312090  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:05.312096  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:05.312154  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:05.339324  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:05.339345  306747 cri.go:89] found id: ""
	I1017 19:30:05.339353  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:05.339406  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:05.343274  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:05.343351  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:05.371042  306747 cri.go:89] found id: ""
	I1017 19:30:05.371067  306747 logs.go:282] 0 containers: []
	W1017 19:30:05.371076  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:05.371086  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:05.371103  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:05.395923  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:05.395957  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:05.453746  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:05.453785  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:05.495400  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:05.495436  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:05.522354  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:05.522384  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:05.603168  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:05.603203  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:05.635130  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:05.635158  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:05.730159  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:05.730196  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:05.805436  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:05.797321   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.798191   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.799878   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.800180   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.801717   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:05.797321   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.798191   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.799878   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.800180   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:05.801717   10279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:05.805458  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:05.805471  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:05.831415  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:05.831453  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:05.915270  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:05.915309  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:08.445553  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:08.457157  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:08.457224  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:08.489306  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:08.489335  306747 cri.go:89] found id: ""
	I1017 19:30:08.489344  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:08.489399  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.493424  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:08.493497  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:08.523021  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:08.523056  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:08.523061  306747 cri.go:89] found id: ""
	I1017 19:30:08.523069  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:08.523133  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.527165  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.530929  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:08.531043  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:08.560240  306747 cri.go:89] found id: ""
	I1017 19:30:08.560266  306747 logs.go:282] 0 containers: []
	W1017 19:30:08.560275  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:08.560282  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:08.560340  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:08.587950  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:08.587974  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:08.587979  306747 cri.go:89] found id: ""
	I1017 19:30:08.587987  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:08.588048  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.591797  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.595627  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:08.595710  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:08.622023  306747 cri.go:89] found id: ""
	I1017 19:30:08.622048  306747 logs.go:282] 0 containers: []
	W1017 19:30:08.622057  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:08.622064  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:08.622123  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:08.652098  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:08.652194  306747 cri.go:89] found id: ""
	I1017 19:30:08.652232  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:08.652399  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:08.657095  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:08.657180  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:08.687380  306747 cri.go:89] found id: ""
	I1017 19:30:08.687404  306747 logs.go:282] 0 containers: []
	W1017 19:30:08.687412  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:08.687421  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:08.687433  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:08.785046  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:08.785084  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:08.815287  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:08.815318  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:08.880972  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:08.881008  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:08.919918  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:08.919947  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:08.994592  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:08.994632  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:09.029806  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:09.029833  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:09.059196  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:09.059224  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:09.077625  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:09.077658  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:09.155722  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:09.147557   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.148286   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.149973   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.150565   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.152238   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:09.147557   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.148286   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.149973   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.150565   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:09.152238   10429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:09.155746  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:09.155759  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:09.230856  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:09.230895  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:11.763218  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:11.774210  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:11.774310  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:11.807759  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:11.807778  306747 cri.go:89] found id: ""
	I1017 19:30:11.807786  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:11.807840  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.812129  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:11.812202  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:11.840430  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:11.840451  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:11.840459  306747 cri.go:89] found id: ""
	I1017 19:30:11.840467  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:11.840562  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.844491  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.848972  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:11.849065  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:11.876962  306747 cri.go:89] found id: ""
	I1017 19:30:11.876986  306747 logs.go:282] 0 containers: []
	W1017 19:30:11.876994  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:11.877000  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:11.877060  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:11.907338  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:11.907402  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:11.907421  306747 cri.go:89] found id: ""
	I1017 19:30:11.907446  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:11.907534  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.911700  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.915708  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:11.915823  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:11.945931  306747 cri.go:89] found id: ""
	I1017 19:30:11.945968  306747 logs.go:282] 0 containers: []
	W1017 19:30:11.945976  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:11.945983  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:11.946041  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:11.973489  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:11.973509  306747 cri.go:89] found id: ""
	I1017 19:30:11.973517  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:11.973582  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:11.979325  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:11.979401  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:12.006387  306747 cri.go:89] found id: ""
	I1017 19:30:12.006415  306747 logs.go:282] 0 containers: []
	W1017 19:30:12.006425  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:12.006437  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:12.006452  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:12.112142  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:12.112180  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:12.130633  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:12.130662  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:12.219234  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:12.204079   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.204586   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.208545   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.212324   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.214784   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:12.204079   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.204586   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.208545   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.212324   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:12.214784   10519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:12.219259  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:12.219274  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:12.248889  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:12.248918  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:12.284961  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:12.284995  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:12.360893  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:12.360930  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:12.394406  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:12.394433  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:12.420215  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:12.420245  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:12.477947  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:12.477980  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:12.559952  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:12.559989  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:15.098061  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:15.110601  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:15.110673  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:15.142831  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:15.142854  306747 cri.go:89] found id: ""
	I1017 19:30:15.142863  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:15.142922  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.147216  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:15.147336  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:15.177462  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:15.177487  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:15.177492  306747 cri.go:89] found id: ""
	I1017 19:30:15.177500  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:15.177556  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.182001  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.186668  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:15.186752  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:15.218350  306747 cri.go:89] found id: ""
	I1017 19:30:15.218375  306747 logs.go:282] 0 containers: []
	W1017 19:30:15.218383  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:15.218389  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:15.218449  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:15.247656  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:15.247730  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:15.247750  306747 cri.go:89] found id: ""
	I1017 19:30:15.247774  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:15.247847  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.251499  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.254966  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:15.255039  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:15.282034  306747 cri.go:89] found id: ""
	I1017 19:30:15.282056  306747 logs.go:282] 0 containers: []
	W1017 19:30:15.282065  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:15.282071  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:15.282131  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:15.313582  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:15.313643  306747 cri.go:89] found id: ""
	I1017 19:30:15.313665  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:15.313739  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:15.317325  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:15.317407  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:15.343894  306747 cri.go:89] found id: ""
	I1017 19:30:15.343921  306747 logs.go:282] 0 containers: []
	W1017 19:30:15.343937  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:15.343947  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:15.343967  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:15.416772  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:15.408215   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.409020   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.410494   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.410798   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.412827   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:15.408215   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.409020   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.410494   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.410798   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:15.412827   10650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:15.416794  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:15.416807  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:15.455991  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:15.456060  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:15.533107  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:15.533144  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:15.605424  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:15.605464  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:15.633544  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:15.633572  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:15.710509  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:15.710545  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:15.744271  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:15.744352  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:15.844584  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:15.844621  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:15.865714  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:15.865745  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:15.910911  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:15.910945  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:18.440664  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:18.451576  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:18.451643  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:18.480927  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:18.480948  306747 cri.go:89] found id: ""
	I1017 19:30:18.480956  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:18.481010  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.484797  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:18.484886  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:18.512958  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:18.513034  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:18.513045  306747 cri.go:89] found id: ""
	I1017 19:30:18.513053  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:18.513106  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.516855  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.520298  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:18.520369  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:18.546427  306747 cri.go:89] found id: ""
	I1017 19:30:18.546453  306747 logs.go:282] 0 containers: []
	W1017 19:30:18.546462  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:18.546468  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:18.546532  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:18.573945  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:18.574007  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:18.574021  306747 cri.go:89] found id: ""
	I1017 19:30:18.574030  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:18.574094  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.577681  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.581276  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:18.581357  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:18.607914  306747 cri.go:89] found id: ""
	I1017 19:30:18.607941  306747 logs.go:282] 0 containers: []
	W1017 19:30:18.607950  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:18.607956  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:18.608013  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:18.634762  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:18.634781  306747 cri.go:89] found id: ""
	I1017 19:30:18.634789  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:18.634842  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:18.638638  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:18.638754  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:18.666586  306747 cri.go:89] found id: ""
	I1017 19:30:18.666610  306747 logs.go:282] 0 containers: []
	W1017 19:30:18.666618  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:18.666627  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:18.666639  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:18.685607  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:18.685637  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:18.740058  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:18.740088  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:18.816374  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:18.816410  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:18.842654  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:18.842686  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:18.921888  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:18.913390   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.913958   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.915701   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.916258   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.918025   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:18.913390   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.913958   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.915701   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.916258   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:18.918025   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:18.921914  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:18.921930  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:18.948267  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:18.948298  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:19.003855  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:19.003894  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:19.033396  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:19.033424  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:19.128308  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:19.128353  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:19.162140  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:19.162166  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:21.764178  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:21.775522  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:21.775596  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:21.803342  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:21.803367  306747 cri.go:89] found id: ""
	I1017 19:30:21.803377  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:21.803442  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.807522  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:21.807598  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:21.836696  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:21.836720  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:21.836726  306747 cri.go:89] found id: ""
	I1017 19:30:21.836734  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:21.836789  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.840752  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.844455  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:21.844557  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:21.872104  306747 cri.go:89] found id: ""
	I1017 19:30:21.872131  306747 logs.go:282] 0 containers: []
	W1017 19:30:21.872140  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:21.872147  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:21.872210  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:21.908413  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:21.908439  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:21.908448  306747 cri.go:89] found id: ""
	I1017 19:30:21.908455  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:21.908513  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.912640  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.916402  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:21.916476  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:21.950380  306747 cri.go:89] found id: ""
	I1017 19:30:21.950466  306747 logs.go:282] 0 containers: []
	W1017 19:30:21.950498  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:21.950517  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:21.950628  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:21.983152  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:21.983177  306747 cri.go:89] found id: ""
	I1017 19:30:21.983187  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:21.983243  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:21.986962  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:21.987037  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:22.019909  306747 cri.go:89] found id: ""
	I1017 19:30:22.019935  306747 logs.go:282] 0 containers: []
	W1017 19:30:22.019944  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:22.019953  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:22.019996  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:22.069135  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:22.069175  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:22.103886  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:22.103916  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:22.133109  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:22.133136  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:22.215579  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:22.215617  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:22.297981  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:22.289181   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.289836   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.291072   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.291590   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.293032   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:22.289181   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.289836   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.291072   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.291590   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:22.293032   10949 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:22.298003  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:22.298017  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:22.373102  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:22.373140  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:22.406083  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:22.406110  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:22.506621  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:22.506659  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:22.526268  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:22.526299  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:22.557755  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:22.557784  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:25.116647  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:25.128310  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:25.128412  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:25.158258  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:25.158281  306747 cri.go:89] found id: ""
	I1017 19:30:25.158293  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:25.158358  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.162693  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:25.162773  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:25.197276  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:25.197301  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:25.197307  306747 cri.go:89] found id: ""
	I1017 19:30:25.197315  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:25.197407  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.201342  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.205350  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:25.205422  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:25.233590  306747 cri.go:89] found id: ""
	I1017 19:30:25.233617  306747 logs.go:282] 0 containers: []
	W1017 19:30:25.233627  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:25.233634  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:25.233693  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:25.260459  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:25.260486  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:25.260492  306747 cri.go:89] found id: ""
	I1017 19:30:25.260500  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:25.260582  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.266116  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.269609  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:25.269709  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:25.299945  306747 cri.go:89] found id: ""
	I1017 19:30:25.299970  306747 logs.go:282] 0 containers: []
	W1017 19:30:25.299979  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:25.299986  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:25.300062  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:25.327588  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:25.327611  306747 cri.go:89] found id: ""
	I1017 19:30:25.327619  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:25.327695  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:25.331614  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:25.331714  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:25.360945  306747 cri.go:89] found id: ""
	I1017 19:30:25.360969  306747 logs.go:282] 0 containers: []
	W1017 19:30:25.360978  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:25.360987  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:25.361018  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:25.419332  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:25.419371  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:25.455422  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:25.455454  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:25.533420  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:25.533454  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:25.561277  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:25.561303  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:25.589003  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:25.589032  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:25.667191  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:25.667225  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:25.697081  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:25.697108  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:25.796723  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:25.796756  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:25.817825  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:25.817854  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:25.895602  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:25.887039   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.887933   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.889709   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.890373   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.891870   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:25.887039   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.887933   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.889709   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.890373   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:25.891870   11120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:25.895626  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:25.895639  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:28.421545  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:28.432472  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:28.432573  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:28.461368  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:28.461391  306747 cri.go:89] found id: ""
	I1017 19:30:28.461400  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:28.461454  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.466145  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:28.466221  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:28.496790  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:28.496814  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:28.496822  306747 cri.go:89] found id: ""
	I1017 19:30:28.496830  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:28.496886  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.500588  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.504150  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:28.504250  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:28.530114  306747 cri.go:89] found id: ""
	I1017 19:30:28.530141  306747 logs.go:282] 0 containers: []
	W1017 19:30:28.530150  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:28.530157  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:28.530257  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:28.560630  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:28.560660  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:28.560675  306747 cri.go:89] found id: ""
	I1017 19:30:28.560684  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:28.560737  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.564422  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.568093  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:28.568165  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:28.598927  306747 cri.go:89] found id: ""
	I1017 19:30:28.598954  306747 logs.go:282] 0 containers: []
	W1017 19:30:28.598963  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:28.598969  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:28.599075  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:28.625977  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:28.626001  306747 cri.go:89] found id: ""
	I1017 19:30:28.626010  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:28.626090  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:28.629847  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:28.629929  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:28.656469  306747 cri.go:89] found id: ""
	I1017 19:30:28.656494  306747 logs.go:282] 0 containers: []
	W1017 19:30:28.656503  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:28.656513  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:28.656548  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:28.758826  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:28.758863  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:28.778387  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:28.778416  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:28.845382  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:28.837571   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.838156   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.839753   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.840320   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.841429   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:28.837571   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.838156   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.839753   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.840320   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:28.841429   11207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:28.845407  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:28.845420  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:28.889092  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:28.889167  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:28.970950  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:28.970986  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:29.003996  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:29.004028  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:29.064888  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:29.064926  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:29.105700  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:29.105729  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:29.141040  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:29.141066  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:29.224674  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:29.224710  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:31.757505  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:31.767848  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:31.767914  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:31.800059  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:31.800082  306747 cri.go:89] found id: ""
	I1017 19:30:31.800093  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:31.800147  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.803723  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:31.803795  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:31.830502  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:31.830525  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:31.830530  306747 cri.go:89] found id: ""
	I1017 19:30:31.830546  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:31.830600  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.834866  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.838218  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:31.838293  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:31.866917  306747 cri.go:89] found id: ""
	I1017 19:30:31.866944  306747 logs.go:282] 0 containers: []
	W1017 19:30:31.866953  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:31.866960  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:31.867015  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:31.898652  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:31.898673  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:31.898679  306747 cri.go:89] found id: ""
	I1017 19:30:31.898692  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:31.898745  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.902404  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.905916  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:31.906005  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:31.936988  306747 cri.go:89] found id: ""
	I1017 19:30:31.937055  306747 logs.go:282] 0 containers: []
	W1017 19:30:31.937080  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:31.937103  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:31.937192  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:31.965478  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:31.965506  306747 cri.go:89] found id: ""
	I1017 19:30:31.965515  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:31.965570  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:31.969541  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:31.969611  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:31.997913  306747 cri.go:89] found id: ""
	I1017 19:30:31.997936  306747 logs.go:282] 0 containers: []
	W1017 19:30:31.997945  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:31.997954  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:31.997967  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:32.075635  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:32.076176  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:32.124512  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:32.124607  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:32.203895  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:32.203930  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:32.237712  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:32.237745  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:32.265784  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:32.265812  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:32.296288  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:32.296316  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:32.413833  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:32.413869  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:32.431287  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:32.431316  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:32.496198  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:32.487969   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.488616   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.490480   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.490935   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.492578   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:32.487969   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.488616   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.490480   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.490935   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:32.492578   11389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:32.496222  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:32.496238  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:32.522527  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:32.522556  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:35.098806  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:35.114025  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:35.114098  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:35.150192  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:35.150215  306747 cri.go:89] found id: ""
	I1017 19:30:35.150224  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:35.150291  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.154431  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:35.154528  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:35.187248  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:35.187274  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:35.187280  306747 cri.go:89] found id: ""
	I1017 19:30:35.187288  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:35.187342  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.190988  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.194467  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:35.194544  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:35.226183  306747 cri.go:89] found id: ""
	I1017 19:30:35.226209  306747 logs.go:282] 0 containers: []
	W1017 19:30:35.226228  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:35.226277  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:35.226345  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:35.254492  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:35.254514  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:35.254532  306747 cri.go:89] found id: ""
	I1017 19:30:35.254542  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:35.254600  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.258515  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.262160  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:35.262245  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:35.290479  306747 cri.go:89] found id: ""
	I1017 19:30:35.290556  306747 logs.go:282] 0 containers: []
	W1017 19:30:35.290573  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:35.290581  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:35.290647  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:35.320673  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:35.320696  306747 cri.go:89] found id: ""
	I1017 19:30:35.320705  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:35.320760  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:35.324577  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:35.324650  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:35.351615  306747 cri.go:89] found id: ""
	I1017 19:30:35.351643  306747 logs.go:282] 0 containers: []
	W1017 19:30:35.351652  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:35.351662  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:35.351674  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:35.426069  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:35.414413   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.418263   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.419343   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.419972   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.421885   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:35.414413   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.418263   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.419343   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.419972   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:35.421885   11474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:35.426092  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:35.426105  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:35.458415  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:35.458445  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:35.532727  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:35.532763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:35.570789  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:35.570821  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:35.654656  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:35.654691  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:35.682337  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:35.682368  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:35.783217  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:35.783263  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:35.809044  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:35.809075  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:35.836181  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:35.836213  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:35.922975  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:35.923013  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:38.460477  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:38.471359  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:38.471462  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:38.500899  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:38.500923  306747 cri.go:89] found id: ""
	I1017 19:30:38.500932  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:38.501005  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.505166  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:38.505244  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:38.531743  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:38.531766  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:38.531771  306747 cri.go:89] found id: ""
	I1017 19:30:38.531779  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:38.531842  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.535645  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.539501  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:38.539580  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:38.568890  306747 cri.go:89] found id: ""
	I1017 19:30:38.568915  306747 logs.go:282] 0 containers: []
	W1017 19:30:38.568923  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:38.568929  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:38.568989  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:38.594452  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:38.594476  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:38.594482  306747 cri.go:89] found id: ""
	I1017 19:30:38.594490  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:38.594544  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.598456  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.606409  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:38.606483  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:38.632993  306747 cri.go:89] found id: ""
	I1017 19:30:38.633015  306747 logs.go:282] 0 containers: []
	W1017 19:30:38.633024  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:38.633030  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:38.633091  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:38.659776  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:38.659800  306747 cri.go:89] found id: ""
	I1017 19:30:38.659809  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:38.659861  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:38.663404  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:38.663507  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:38.688978  306747 cri.go:89] found id: ""
	I1017 19:30:38.689003  306747 logs.go:282] 0 containers: []
	W1017 19:30:38.689012  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:38.689021  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:38.689033  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:38.722471  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:38.722497  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:38.800538  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:38.800575  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:38.832423  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:38.832451  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:38.939609  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:38.939648  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:38.959665  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:38.959701  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:39.039314  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:39.030321   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.030924   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.032747   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.033627   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.034935   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:39.030321   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.030924   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.032747   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.033627   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:39.034935   11641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:39.039340  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:39.039355  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:39.113637  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:39.113709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:39.148504  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:39.148662  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:39.223019  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:39.223056  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:39.253605  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:39.253635  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:41.780640  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:41.791876  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:41.791949  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:41.819510  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:41.819583  306747 cri.go:89] found id: ""
	I1017 19:30:41.819606  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:41.819691  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.824390  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:41.824462  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:41.856605  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:41.856636  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:41.856642  306747 cri.go:89] found id: ""
	I1017 19:30:41.856649  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:41.856715  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.864466  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.868588  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:41.868666  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:41.903466  306747 cri.go:89] found id: ""
	I1017 19:30:41.903498  306747 logs.go:282] 0 containers: []
	W1017 19:30:41.903507  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:41.903514  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:41.903571  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:41.930657  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:41.930682  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:41.930687  306747 cri.go:89] found id: ""
	I1017 19:30:41.930694  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:41.930749  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.934754  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.938781  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:41.938871  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:41.968280  306747 cri.go:89] found id: ""
	I1017 19:30:41.968306  306747 logs.go:282] 0 containers: []
	W1017 19:30:41.968315  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:41.968322  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:41.968402  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:41.995850  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:41.995931  306747 cri.go:89] found id: ""
	I1017 19:30:41.995955  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:41.996030  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:41.999630  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:41.999700  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:42.044891  306747 cri.go:89] found id: ""
	I1017 19:30:42.044926  306747 logs.go:282] 0 containers: []
	W1017 19:30:42.044935  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:42.044952  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:42.044971  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:42.174128  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:42.174267  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:42.224381  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:42.224413  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:42.333478  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:42.333518  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:42.353368  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:42.353403  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:42.391604  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:42.391635  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:42.426317  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:42.426347  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:42.503367  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:42.494794   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.495471   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.497096   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.497695   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.499206   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:42.494794   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.495471   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.497096   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.497695   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:42.499206   11786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:42.503388  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:42.503401  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:42.560324  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:42.560359  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:42.632932  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:42.632968  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:42.665758  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:42.665844  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:45.196869  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:45.213931  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:45.214024  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:45.259283  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:45.259312  306747 cri.go:89] found id: ""
	I1017 19:30:45.259321  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:45.259390  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.265805  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:45.265913  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:45.316071  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:45.316098  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:45.316103  306747 cri.go:89] found id: ""
	I1017 19:30:45.316112  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:45.316178  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.329246  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.342518  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:45.342722  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:45.403649  306747 cri.go:89] found id: ""
	I1017 19:30:45.403681  306747 logs.go:282] 0 containers: []
	W1017 19:30:45.403691  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:45.403700  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:45.403771  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:45.436373  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:45.436398  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:45.436404  306747 cri.go:89] found id: ""
	I1017 19:30:45.436412  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:45.436470  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.442171  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.446282  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:45.446378  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:45.480185  306747 cri.go:89] found id: ""
	I1017 19:30:45.480211  306747 logs.go:282] 0 containers: []
	W1017 19:30:45.480269  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:45.480281  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:45.480348  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:45.519821  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:45.519845  306747 cri.go:89] found id: ""
	I1017 19:30:45.519853  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:45.519916  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:45.523961  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:45.524044  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:45.553268  306747 cri.go:89] found id: ""
	I1017 19:30:45.553295  306747 logs.go:282] 0 containers: []
	W1017 19:30:45.553336  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:45.553353  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:45.553376  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:45.581168  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:45.581199  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:45.659459  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:45.659495  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:45.698325  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:45.698356  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:45.730552  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:45.730578  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:45.761205  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:45.761233  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:45.859241  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:45.859345  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:45.879219  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:45.879249  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:45.956579  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:45.956613  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:46.038168  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:46.038207  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:46.088885  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:46.088920  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:46.156435  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:46.147068   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.148033   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.149640   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.150155   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.151669   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:46.147068   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.148033   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.149640   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.150155   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:46.151669   11955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
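
Every "failed describe nodes" warning in this stretch of the log reduces to the same condition: kube-apiserver is not accepting connections on localhost:8443 while minikube waits for the restarted control plane, so each kubectl invocation exits with "connection refused". The following is a minimal, hypothetical Go sketch (not minikube's own code) that probes the same endpoint the failing kubectl calls depend on; the https://localhost:8443 address and the choice to skip TLS verification are assumptions taken from the errors above.

// probe_apiserver.go - illustrative sketch only, not minikube code.
// It checks the endpoint that the failing "kubectl describe nodes"
// calls in this log depend on (assumed: https://localhost:8443).
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a cert the host does not trust, so this
			// reachability probe skips verification (assumption, probe only).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		// Mirrors the log: dial tcp [::1]:8443: connect: connection refused
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver reachable, /healthz status:", resp.Status)
}

If run on the node (where these kubectl invocations execute via ssh_runner), this would keep reporting the same connection-refused error until the kube-apiserver container comes back up.
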
	I1017 19:30:48.657371  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:48.668345  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:48.668414  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:48.699974  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:48.699994  306747 cri.go:89] found id: ""
	I1017 19:30:48.700002  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:48.700055  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.703706  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:48.703773  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:48.729231  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:48.729255  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:48.729260  306747 cri.go:89] found id: ""
	I1017 19:30:48.729267  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:48.729347  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.733057  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.736560  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:48.736650  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:48.769891  306747 cri.go:89] found id: ""
	I1017 19:30:48.769917  306747 logs.go:282] 0 containers: []
	W1017 19:30:48.769925  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:48.769932  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:48.769988  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:48.796614  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:48.796633  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:48.796638  306747 cri.go:89] found id: ""
	I1017 19:30:48.796645  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:48.796697  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.800347  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.803641  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:48.803707  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:48.829352  306747 cri.go:89] found id: ""
	I1017 19:30:48.829375  306747 logs.go:282] 0 containers: []
	W1017 19:30:48.829384  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:48.829390  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:48.829448  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:48.863517  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:48.863542  306747 cri.go:89] found id: ""
	I1017 19:30:48.863551  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:48.863603  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:48.867339  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:48.867411  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:48.896584  306747 cri.go:89] found id: ""
	I1017 19:30:48.896609  306747 logs.go:282] 0 containers: []
	W1017 19:30:48.896618  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:48.896626  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:48.896639  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:48.990111  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:48.990146  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:49.015233  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:49.015265  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:49.040589  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:49.040623  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:49.100203  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:49.100237  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:49.135876  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:49.135909  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:49.168685  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:49.168756  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:49.211941  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:49.212009  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:49.278129  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:49.270279   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.271015   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.272492   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.272926   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.274542   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:49.270279   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.271015   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.272492   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.272926   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:49.274542   12075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:49.278151  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:49.278166  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:49.355582  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:49.355620  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:49.385861  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:49.385888  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:51.961962  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:51.973739  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:51.973839  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:52.007060  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:52.007089  306747 cri.go:89] found id: ""
	I1017 19:30:52.007098  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:52.007173  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.011950  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:52.012025  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:52.043424  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:52.043445  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:52.043450  306747 cri.go:89] found id: ""
	I1017 19:30:52.043458  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:52.043515  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.048102  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.051750  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:52.051836  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:52.091285  306747 cri.go:89] found id: ""
	I1017 19:30:52.091362  306747 logs.go:282] 0 containers: []
	W1017 19:30:52.091384  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:52.091412  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:52.091533  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:52.120853  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:52.120928  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:52.120947  306747 cri.go:89] found id: ""
	I1017 19:30:52.120962  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:52.121037  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.125047  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.128913  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:52.129029  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:52.155112  306747 cri.go:89] found id: ""
	I1017 19:30:52.155138  306747 logs.go:282] 0 containers: []
	W1017 19:30:52.155147  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:52.155153  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:52.155217  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:52.181654  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:52.181678  306747 cri.go:89] found id: ""
	I1017 19:30:52.181686  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:52.181738  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:52.185468  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:52.185538  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:52.210532  306747 cri.go:89] found id: ""
	I1017 19:30:52.210558  306747 logs.go:282] 0 containers: []
	W1017 19:30:52.210567  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:52.210577  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:52.210591  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:52.283758  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:52.283793  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:52.321133  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:52.321172  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:52.349409  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:52.349440  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:52.454035  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:52.454072  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:52.474228  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:52.474336  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:52.549970  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:52.541938   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.542794   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.543926   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.544704   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.546272   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:52.541938   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.542794   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.543926   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.544704   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:52.546272   12197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:52.550045  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:52.550073  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:52.637174  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:52.637221  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:52.668341  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:52.668418  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:52.761051  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:52.761091  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:52.792065  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:52.792160  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:55.319606  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:55.330935  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:55.331008  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:55.358717  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:55.358739  306747 cri.go:89] found id: ""
	I1017 19:30:55.358747  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:55.358802  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.362654  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:55.362769  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:55.397277  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:55.397301  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:55.397306  306747 cri.go:89] found id: ""
	I1017 19:30:55.397314  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:55.397368  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.401240  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.405131  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:55.405244  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:55.432480  306747 cri.go:89] found id: ""
	I1017 19:30:55.432602  306747 logs.go:282] 0 containers: []
	W1017 19:30:55.432627  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:55.432666  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:55.432750  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:55.465240  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:55.465314  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:55.465333  306747 cri.go:89] found id: ""
	I1017 19:30:55.465357  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:55.465448  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.469415  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.473023  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:55.473096  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:55.499608  306747 cri.go:89] found id: ""
	I1017 19:30:55.499681  306747 logs.go:282] 0 containers: []
	W1017 19:30:55.499704  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:55.499724  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:55.499814  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:55.526471  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:55.526494  306747 cri.go:89] found id: ""
	I1017 19:30:55.526502  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:55.526586  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:55.530319  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:55.530395  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:55.558617  306747 cri.go:89] found id: ""
	I1017 19:30:55.558639  306747 logs.go:282] 0 containers: []
	W1017 19:30:55.558647  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:55.558656  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:55.558668  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:55.578357  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:55.578390  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:55.642730  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:55.635023   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.635478   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.637010   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.637409   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.638832   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:55.635023   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.635478   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.637010   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.637409   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:55.638832   12306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:55.642749  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:55.642763  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:55.673301  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:55.673329  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:55.735266  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:55.735301  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:55.777444  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:55.777474  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:55.891903  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:55.891985  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:55.976455  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:55.976492  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:56.005202  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:56.005238  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:56.034021  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:56.034049  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:56.086550  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:56.086581  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:58.687094  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:30:58.698343  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:30:58.698420  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:30:58.737082  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:58.737144  306747 cri.go:89] found id: ""
	I1017 19:30:58.737165  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:30:58.737251  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.740769  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:30:58.740830  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:30:58.768900  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:30:58.768920  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:58.768931  306747 cri.go:89] found id: ""
	I1017 19:30:58.768938  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:30:58.768991  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.773597  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.777023  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:30:58.777094  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:30:58.808627  306747 cri.go:89] found id: ""
	I1017 19:30:58.808654  306747 logs.go:282] 0 containers: []
	W1017 19:30:58.808675  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:30:58.808681  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:30:58.808778  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:30:58.833787  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:58.833810  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:58.833815  306747 cri.go:89] found id: ""
	I1017 19:30:58.833823  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:30:58.833902  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.837729  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.841076  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:30:58.841161  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:30:58.876060  306747 cri.go:89] found id: ""
	I1017 19:30:58.876089  306747 logs.go:282] 0 containers: []
	W1017 19:30:58.876099  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:30:58.876107  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:30:58.876189  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:30:58.906434  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:58.906509  306747 cri.go:89] found id: ""
	I1017 19:30:58.906524  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:30:58.906598  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:30:58.911053  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:30:58.911127  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:30:58.936724  306747 cri.go:89] found id: ""
	I1017 19:30:58.936748  306747 logs.go:282] 0 containers: []
	W1017 19:30:58.936757  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:30:58.936765  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:30:58.936776  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:30:59.014607  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:30:59.014643  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:30:59.044576  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:30:59.044655  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:30:59.124177  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:30:59.124211  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:30:59.156709  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:30:59.156737  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:30:59.175384  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:30:59.175413  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:30:59.209100  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:30:59.209136  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:30:59.235216  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:30:59.235244  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:30:59.337596  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:30:59.337631  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:30:59.405118  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:30:59.396347   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.396989   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.398679   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.399208   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.400795   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:30:59.396347   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.396989   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.398679   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.399208   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:30:59.400795   12493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:30:59.405140  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:30:59.405153  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:30:59.431225  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:30:59.431255  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:02.008171  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:02.020307  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:02.020387  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:02.051051  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:02.051079  306747 cri.go:89] found id: ""
	I1017 19:31:02.051099  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:02.051161  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.056015  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:02.056088  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:02.089743  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:02.089817  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:02.089836  306747 cri.go:89] found id: ""
	I1017 19:31:02.089856  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:02.089943  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.093857  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.097708  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:02.097837  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:02.123389  306747 cri.go:89] found id: ""
	I1017 19:31:02.123411  306747 logs.go:282] 0 containers: []
	W1017 19:31:02.123420  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:02.123426  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:02.123483  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:02.150505  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:02.150582  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:02.150596  306747 cri.go:89] found id: ""
	I1017 19:31:02.150605  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:02.150681  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.154543  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.158104  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:02.158177  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:02.186868  306747 cri.go:89] found id: ""
	I1017 19:31:02.186895  306747 logs.go:282] 0 containers: []
	W1017 19:31:02.186904  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:02.186911  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:02.186974  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:02.215359  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:02.215426  306747 cri.go:89] found id: ""
	I1017 19:31:02.215451  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:02.215524  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:02.219153  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:02.219266  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:02.246345  306747 cri.go:89] found id: ""
	I1017 19:31:02.246371  306747 logs.go:282] 0 containers: []
	W1017 19:31:02.246381  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:02.246391  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:02.246402  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:02.280313  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:02.280387  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:02.385786  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:02.385822  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:02.414602  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:02.414679  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:02.492313  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:02.492350  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:02.511027  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:02.511067  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:02.590723  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:02.582016   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.582767   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.584046   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.585740   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.586186   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:02.582016   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.582767   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.584046   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.585740   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:02.586186   12608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:02.590747  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:02.590762  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:02.653228  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:02.653264  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:02.687148  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:02.687183  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:02.790229  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:02.790269  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:02.819586  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:02.819615  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:05.355439  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:05.367250  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:05.367353  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:05.393587  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:05.393611  306747 cri.go:89] found id: ""
	I1017 19:31:05.393620  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:05.393674  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.397564  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:05.397685  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:05.423815  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:05.423840  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:05.423845  306747 cri.go:89] found id: ""
	I1017 19:31:05.423853  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:05.423921  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.427632  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.431060  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:05.431129  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:05.457152  306747 cri.go:89] found id: ""
	I1017 19:31:05.457176  306747 logs.go:282] 0 containers: []
	W1017 19:31:05.457186  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:05.457192  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:05.457256  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:05.483757  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:05.483779  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:05.483784  306747 cri.go:89] found id: ""
	I1017 19:31:05.483791  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:05.483845  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.487471  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.490789  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:05.490859  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:05.516653  306747 cri.go:89] found id: ""
	I1017 19:31:05.516676  306747 logs.go:282] 0 containers: []
	W1017 19:31:05.516684  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:05.516690  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:05.516793  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:05.542033  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:05.542059  306747 cri.go:89] found id: ""
	I1017 19:31:05.542091  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:05.542153  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:05.545908  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:05.545978  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:05.571870  306747 cri.go:89] found id: ""
	I1017 19:31:05.571892  306747 logs.go:282] 0 containers: []
	W1017 19:31:05.571901  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:05.571909  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:05.571923  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:05.649030  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:05.639899   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.640483   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.642053   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.642716   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.644399   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:05.639899   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.640483   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.642053   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.642716   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:05.644399   12718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:05.649050  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:05.649062  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:05.677036  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:05.677065  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:05.718764  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:05.718795  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:05.803861  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:05.803897  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:05.835788  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:05.835814  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:05.864823  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:05.864853  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:05.947756  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:05.947788  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:05.979938  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:05.980005  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:06.080355  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:06.080392  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:06.104116  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:06.104145  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:08.667177  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:08.677727  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:08.677793  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:08.704338  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:08.704362  306747 cri.go:89] found id: ""
	I1017 19:31:08.704370  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:08.704422  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.707981  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:08.708049  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:08.733111  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:08.733130  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:08.733135  306747 cri.go:89] found id: ""
	I1017 19:31:08.733142  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:08.733201  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.737039  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.740374  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:08.740480  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:08.768239  306747 cri.go:89] found id: ""
	I1017 19:31:08.768307  306747 logs.go:282] 0 containers: []
	W1017 19:31:08.768338  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:08.768381  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:08.768471  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:08.795436  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:08.795499  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:08.795524  306747 cri.go:89] found id: ""
	I1017 19:31:08.795537  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:08.795609  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.799450  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.803242  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:08.803312  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:08.831323  306747 cri.go:89] found id: ""
	I1017 19:31:08.831348  306747 logs.go:282] 0 containers: []
	W1017 19:31:08.831358  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:08.831364  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:08.831427  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:08.865991  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:08.866014  306747 cri.go:89] found id: ""
	I1017 19:31:08.866022  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:08.866077  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:08.870085  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:08.870174  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:08.905447  306747 cri.go:89] found id: ""
	I1017 19:31:08.905475  306747 logs.go:282] 0 containers: []
	W1017 19:31:08.905483  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:08.905492  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:08.905504  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:08.988463  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:08.988574  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:09.021674  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:09.021711  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:09.050080  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:09.050111  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:09.126939  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:09.126972  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:09.161551  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:09.161580  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:09.179459  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:09.179490  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:09.209038  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:09.209066  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:09.271767  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:09.271810  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:09.373919  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:09.373956  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:09.439533  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:09.431442   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.432120   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.433687   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.434214   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.435793   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:09.431442   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.432120   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.433687   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.434214   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:09.435793   12914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:09.439556  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:09.439570  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:11.978816  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:11.990102  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:11.990174  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:12.023196  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:12.023225  306747 cri.go:89] found id: ""
	I1017 19:31:12.023235  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:12.023302  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.027739  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:12.027832  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:12.055241  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:12.055265  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:12.055270  306747 cri.go:89] found id: ""
	I1017 19:31:12.055278  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:12.055336  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.059592  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.064052  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:12.064121  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:12.103548  306747 cri.go:89] found id: ""
	I1017 19:31:12.103575  306747 logs.go:282] 0 containers: []
	W1017 19:31:12.103584  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:12.103591  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:12.103650  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:12.131971  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:12.131995  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:12.132000  306747 cri.go:89] found id: ""
	I1017 19:31:12.132008  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:12.132063  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.136064  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.139529  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:12.139597  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:12.165954  306747 cri.go:89] found id: ""
	I1017 19:31:12.165977  306747 logs.go:282] 0 containers: []
	W1017 19:31:12.165985  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:12.165991  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:12.166049  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:12.195543  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:12.195568  306747 cri.go:89] found id: ""
	I1017 19:31:12.195577  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:12.195632  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:12.199531  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:12.199603  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:12.225881  306747 cri.go:89] found id: ""
	I1017 19:31:12.225911  306747 logs.go:282] 0 containers: []
	W1017 19:31:12.225920  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:12.225929  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:12.225942  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:12.259524  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:12.259552  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:12.333075  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:12.333112  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:12.363221  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:12.363249  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:12.467386  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:12.467420  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:12.498049  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:12.498077  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:12.577701  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:12.577736  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:12.607614  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:12.607650  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:12.637568  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:12.637597  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:12.717020  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:12.717054  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:12.740140  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:12.740170  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:12.806245  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:12.796625   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.797249   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.799733   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.800324   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.802649   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:12.796625   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.797249   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.799733   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.800324   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:12.802649   13056 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:15.306473  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:15.318959  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:15.319030  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:15.345727  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:15.345823  306747 cri.go:89] found id: ""
	I1017 19:31:15.345847  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:15.345935  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.349860  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:15.349937  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:15.382414  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:15.382437  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:15.382442  306747 cri.go:89] found id: ""
	I1017 19:31:15.382463  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:15.382539  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.386718  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.390470  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:15.390578  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:15.417577  306747 cri.go:89] found id: ""
	I1017 19:31:15.417652  306747 logs.go:282] 0 containers: []
	W1017 19:31:15.417668  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:15.417676  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:15.417743  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:15.445163  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:15.445206  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:15.445211  306747 cri.go:89] found id: ""
	I1017 19:31:15.445220  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:15.445305  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.450196  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.453988  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:15.454058  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:15.479623  306747 cri.go:89] found id: ""
	I1017 19:31:15.479647  306747 logs.go:282] 0 containers: []
	W1017 19:31:15.479655  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:15.479662  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:15.479725  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:15.505913  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:15.505936  306747 cri.go:89] found id: ""
	I1017 19:31:15.505953  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:15.506007  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:15.509808  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:15.509881  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:15.535383  306747 cri.go:89] found id: ""
	I1017 19:31:15.535408  306747 logs.go:282] 0 containers: []
	W1017 19:31:15.535418  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:15.535428  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:15.535440  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:15.561245  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:15.561272  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:15.622736  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:15.622771  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:15.660115  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:15.660150  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:15.758501  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:15.758536  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:15.778239  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:15.778273  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:15.857887  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:15.842831   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.843942   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.845164   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.846077   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.848805   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:15.842831   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.843942   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.845164   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.846077   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:15.848805   13156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:15.857910  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:15.857926  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:15.946523  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:15.946560  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:15.980219  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:15.980245  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:16.013998  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:16.014027  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:16.095391  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:16.095426  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:18.629382  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:18.642985  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:18.643054  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:18.669511  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:18.669532  306747 cri.go:89] found id: ""
	I1017 19:31:18.669541  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:18.669601  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.673633  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:18.673707  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:18.702215  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:18.702239  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:18.702244  306747 cri.go:89] found id: ""
	I1017 19:31:18.702252  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:18.702331  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.709379  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.717482  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:18.717554  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:18.744246  306747 cri.go:89] found id: ""
	I1017 19:31:18.744269  306747 logs.go:282] 0 containers: []
	W1017 19:31:18.744277  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:18.744283  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:18.744337  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:18.770169  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:18.770192  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:18.770197  306747 cri.go:89] found id: ""
	I1017 19:31:18.770205  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:18.770271  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.774060  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.777555  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:18.777624  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:18.804459  306747 cri.go:89] found id: ""
	I1017 19:31:18.804485  306747 logs.go:282] 0 containers: []
	W1017 19:31:18.804494  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:18.804500  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:18.804582  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:18.831698  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:18.831721  306747 cri.go:89] found id: ""
	I1017 19:31:18.831730  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:18.831783  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:18.837132  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:18.837273  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:18.870956  306747 cri.go:89] found id: ""
	I1017 19:31:18.870983  306747 logs.go:282] 0 containers: []
	W1017 19:31:18.870992  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:18.871001  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:18.871012  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:18.986913  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:18.986950  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:19.007461  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:19.007493  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:19.035000  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:19.035029  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:19.116120  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:19.116154  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:19.146274  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:19.146303  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:19.226087  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:19.226126  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:19.274249  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:19.274285  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:19.342797  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:19.333272   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.333919   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.335774   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.336320   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.338756   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:19.333272   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.333919   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.335774   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.336320   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:19.338756   13303 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:19.342824  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:19.342837  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:19.405167  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:19.405241  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:19.437359  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:19.437389  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:21.966216  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:21.977051  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:21.977124  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:22.010370  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:22.010393  306747 cri.go:89] found id: ""
	I1017 19:31:22.010401  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:22.010463  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.014786  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:22.014905  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:22.054881  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:22.054905  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:22.054910  306747 cri.go:89] found id: ""
	I1017 19:31:22.054917  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:22.054974  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.058919  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.062725  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:22.062801  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:22.092827  306747 cri.go:89] found id: ""
	I1017 19:31:22.092910  306747 logs.go:282] 0 containers: []
	W1017 19:31:22.092926  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:22.092935  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:22.093011  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:22.120574  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:22.120597  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:22.120602  306747 cri.go:89] found id: ""
	I1017 19:31:22.120609  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:22.120665  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.124579  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.128240  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:22.128314  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:22.155355  306747 cri.go:89] found id: ""
	I1017 19:31:22.155382  306747 logs.go:282] 0 containers: []
	W1017 19:31:22.155392  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:22.155398  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:22.155457  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:22.182686  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:22.182750  306747 cri.go:89] found id: ""
	I1017 19:31:22.182771  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:22.182857  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:22.186655  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:22.186754  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:22.211995  306747 cri.go:89] found id: ""
	I1017 19:31:22.212020  306747 logs.go:282] 0 containers: []
	W1017 19:31:22.212029  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:22.212038  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:22.212080  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:22.310483  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:22.310518  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:22.376696  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:22.367517   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.368315   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.370151   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.370790   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.372572   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:22.367517   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.368315   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.370151   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.370790   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:22.372572   13398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:22.376758  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:22.376778  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:22.406493  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:22.406521  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:22.425071  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:22.425110  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:22.454385  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:22.454416  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:22.516625  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:22.516662  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:22.551521  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:22.551555  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:22.645961  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:22.645999  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:22.676665  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:22.676691  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:22.757888  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:22.758011  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:25.307695  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:25.318532  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:25.318666  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:25.351844  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:25.351866  306747 cri.go:89] found id: ""
	I1017 19:31:25.351873  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:25.351936  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.355571  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:25.355637  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:25.382616  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:25.382640  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:25.382646  306747 cri.go:89] found id: ""
	I1017 19:31:25.382664  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:25.382717  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.386649  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.390174  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:25.390311  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:25.417606  306747 cri.go:89] found id: ""
	I1017 19:31:25.417630  306747 logs.go:282] 0 containers: []
	W1017 19:31:25.417639  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:25.417645  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:25.417706  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:25.445452  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:25.445475  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:25.445480  306747 cri.go:89] found id: ""
	I1017 19:31:25.445487  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:25.445541  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.449471  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.452872  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:25.452956  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:25.480615  306747 cri.go:89] found id: ""
	I1017 19:31:25.480648  306747 logs.go:282] 0 containers: []
	W1017 19:31:25.480658  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:25.480664  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:25.480732  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:25.507575  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:25.507595  306747 cri.go:89] found id: ""
	I1017 19:31:25.507603  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:25.507669  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:25.512130  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:25.512199  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:25.539371  306747 cri.go:89] found id: ""
	I1017 19:31:25.539441  306747 logs.go:282] 0 containers: []
	W1017 19:31:25.539463  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:25.539488  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:25.539527  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:25.619877  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:25.619914  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:25.638042  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:25.638071  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:25.677301  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:25.677335  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:25.768647  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:25.768682  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:25.808421  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:25.808456  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:25.833684  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:25.833709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:25.930177  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:25.930222  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:25.981992  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:25.982022  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:26.087083  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:26.087123  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:26.158486  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:26.150658   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.151278   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.152877   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.153291   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.154745   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:26.150658   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.151278   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.152877   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.153291   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:26.154745   13590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:26.158506  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:26.158519  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:28.685675  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:28.697159  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:28.697228  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:28.724197  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:28.724223  306747 cri.go:89] found id: ""
	I1017 19:31:28.724231  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:28.724294  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.728163  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:28.728249  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:28.755375  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:28.755400  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:28.755405  306747 cri.go:89] found id: ""
	I1017 19:31:28.755413  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:28.755465  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.759475  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.762827  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:28.762901  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:28.788123  306747 cri.go:89] found id: ""
	I1017 19:31:28.788150  306747 logs.go:282] 0 containers: []
	W1017 19:31:28.788159  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:28.788165  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:28.788221  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:28.818579  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:28.818611  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:28.818617  306747 cri.go:89] found id: ""
	I1017 19:31:28.818624  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:28.818677  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.822375  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.825827  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:28.825901  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:28.856344  306747 cri.go:89] found id: ""
	I1017 19:31:28.856371  306747 logs.go:282] 0 containers: []
	W1017 19:31:28.856379  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:28.856386  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:28.856456  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:28.883877  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:28.883901  306747 cri.go:89] found id: ""
	I1017 19:31:28.883909  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:28.883969  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:28.890405  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:28.890482  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:28.919970  306747 cri.go:89] found id: ""
	I1017 19:31:28.919997  306747 logs.go:282] 0 containers: []
	W1017 19:31:28.920007  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:28.920016  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:28.920028  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:28.938590  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:28.938619  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:29.012463  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:29.012502  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:29.051714  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:29.051751  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:29.139864  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:29.139904  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:29.167130  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:29.167157  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:29.244122  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:29.244163  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:29.289243  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:29.289271  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:29.365219  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:29.356772   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.357390   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.358919   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.359407   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.360893   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:29.356772   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.357390   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.358919   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.359407   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:29.360893   13717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:29.365246  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:29.365260  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:29.391983  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:29.392013  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:29.418030  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:29.418136  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:32.016682  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:32.027928  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1017 19:31:32.028056  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1017 19:31:32.057743  306747 cri.go:89] found id: "134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:32.057770  306747 cri.go:89] found id: ""
	I1017 19:31:32.057779  306747 logs.go:282] 1 containers: [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b]
	I1017 19:31:32.057832  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.062215  306747 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1017 19:31:32.062350  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1017 19:31:32.096282  306747 cri.go:89] found id: "da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:32.096359  306747 cri.go:89] found id: "40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:32.096379  306747 cri.go:89] found id: ""
	I1017 19:31:32.096402  306747 logs.go:282] 2 containers: [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9]
	I1017 19:31:32.096490  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.100272  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.104020  306747 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1017 19:31:32.104094  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1017 19:31:32.130658  306747 cri.go:89] found id: ""
	I1017 19:31:32.130684  306747 logs.go:282] 0 containers: []
	W1017 19:31:32.130692  306747 logs.go:284] No container was found matching "coredns"
	I1017 19:31:32.130698  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1017 19:31:32.130785  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1017 19:31:32.158436  306747 cri.go:89] found id: "e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:32.158459  306747 cri.go:89] found id: "565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:32.158464  306747 cri.go:89] found id: ""
	I1017 19:31:32.158472  306747 logs.go:282] 2 containers: [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26]
	I1017 19:31:32.158524  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.162501  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.165977  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1017 19:31:32.166093  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1017 19:31:32.192337  306747 cri.go:89] found id: ""
	I1017 19:31:32.192414  306747 logs.go:282] 0 containers: []
	W1017 19:31:32.192438  306747 logs.go:284] No container was found matching "kube-proxy"
	I1017 19:31:32.192460  306747 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1017 19:31:32.192566  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1017 19:31:32.224591  306747 cri.go:89] found id: "689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:32.224625  306747 cri.go:89] found id: ""
	I1017 19:31:32.224643  306747 logs.go:282] 1 containers: [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d]
	I1017 19:31:32.224699  306747 ssh_runner.go:195] Run: which crictl
	I1017 19:31:32.228992  306747 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1017 19:31:32.229114  306747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1017 19:31:32.263902  306747 cri.go:89] found id: ""
	I1017 19:31:32.263936  306747 logs.go:282] 0 containers: []
	W1017 19:31:32.263945  306747 logs.go:284] No container was found matching "kindnet"
	I1017 19:31:32.263954  306747 logs.go:123] Gathering logs for describe nodes ...
	I1017 19:31:32.263970  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1017 19:31:32.331346  306747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1017 19:31:32.321358   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.322175   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.325150   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.325743   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.327508   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1017 19:31:32.321358   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.322175   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.325150   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.325743   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1017 19:31:32.327508   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1017 19:31:32.331370  306747 logs.go:123] Gathering logs for kube-apiserver [134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b] ...
	I1017 19:31:32.331383  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 134fec16b25670b3c8bb5b8eb943c3000bac8aacf1ca713c5f9601be5f03781b"
	I1017 19:31:32.358344  306747 logs.go:123] Gathering logs for etcd [da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7] ...
	I1017 19:31:32.358372  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 da4ecca5d9133450e5484665c42d918e088f6ced7077579c625f7e89f0d57ac7"
	I1017 19:31:32.419310  306747 logs.go:123] Gathering logs for etcd [40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9] ...
	I1017 19:31:32.419347  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c47d92418e3fb23629a428993001845bdfc1df6afef519cdaa762483ad39a9"
	I1017 19:31:32.462060  306747 logs.go:123] Gathering logs for kube-scheduler [e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24] ...
	I1017 19:31:32.462091  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e8383aede6ec233b1945b0b3824457b6b8f22bbe7d5f8c5bb2f5a3abb159ba24"
	I1017 19:31:32.543672  306747 logs.go:123] Gathering logs for kube-controller-manager [689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d] ...
	I1017 19:31:32.543709  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 689eae2ae488e0f706ad4ca04cc0de1c471a39906f681bc2b8bac0a3975e7f4d"
	I1017 19:31:32.572300  306747 logs.go:123] Gathering logs for CRI-O ...
	I1017 19:31:32.572327  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1017 19:31:32.650752  306747 logs.go:123] Gathering logs for container status ...
	I1017 19:31:32.650785  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1017 19:31:32.687208  306747 logs.go:123] Gathering logs for kubelet ...
	I1017 19:31:32.687239  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1017 19:31:32.785332  306747 logs.go:123] Gathering logs for dmesg ...
	I1017 19:31:32.785370  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1017 19:31:32.804237  306747 logs.go:123] Gathering logs for kube-scheduler [565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26] ...
	I1017 19:31:32.804272  306747 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 565c8c8ec69f57a93ddcfcd79fe39195bff18bd6c6a70875d4f9d82f111fdf26"
	I1017 19:31:35.336200  306747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:31:35.351300  306747 out.go:203] 
	W1017 19:31:35.354294  306747 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1017 19:31:35.354331  306747 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1017 19:31:35.354341  306747 out.go:285] * Related issues:
	W1017 19:31:35.354355  306747 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1017 19:31:35.354368  306747 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1017 19:31:35.357325  306747 out.go:203] 
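	For reference, a minimal diagnostic sketch of the checks the suggestion above points at, assuming shell access to the ha-254035 node. These are standard crictl/journalctl/getenforce invocations written here by the editor, not commands captured in this run, and the container ID is a placeholder:
	
	  sudo crictl ps -a --name kube-apiserver               # is an apiserver container present, and in what state?
	  sudo crictl logs --tail 100 <apiserver-container-id>  # if it exited, inspect why
	  sudo journalctl -u kubelet -n 100 --no-pager          # kubelet errors around starting the static apiserver pod
	  getenforce                                            # SELinux mode (if the tool is installed); the suggestion expects "Disabled"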
	
	
	==> CRI-O <==
	Oct 17 19:26:12 ha-254035 crio[663]: time="2025-10-17T19:26:12.336555027Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:26:12 ha-254035 crio[663]: time="2025-10-17T19:26:12.33658308Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:26:12 ha-254035 crio[663]: time="2025-10-17T19:26:12.339801184Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:26:12 ha-254035 crio[663]: time="2025-10-17T19:26:12.339831682Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.953037254Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=202e1d64-912a-476c-ba5a-77b37bc42979 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.953839727Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=6205eb3f-5cb1-4748-8710-0ffe69b4490c name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.955014194Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-254035/kube-controller-manager" id=081f7878-c585-4466-b2db-1bae5c6893ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.955225536Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.961488794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.962588933Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.983518924Z" level=info msg="Created container 09b363cd1ecad740d92d4ebc587ded23344ec9174985137bd42062048a60cec4: kube-system/kube-controller-manager-ha-254035/kube-controller-manager" id=081f7878-c585-4466-b2db-1bae5c6893ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.984251327Z" level=info msg="Starting container: 09b363cd1ecad740d92d4ebc587ded23344ec9174985137bd42062048a60cec4" id=0d55a9d8-f1b5-40f1-8bd6-984aab4be84b name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:26:27 ha-254035 crio[663]: time="2025-10-17T19:26:27.987082086Z" level=info msg="Started container" PID=1467 containerID=09b363cd1ecad740d92d4ebc587ded23344ec9174985137bd42062048a60cec4 description=kube-system/kube-controller-manager-ha-254035/kube-controller-manager id=0d55a9d8-f1b5-40f1-8bd6-984aab4be84b name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee9f2d44d32377576c274975d42c83c6d10327b8cf9c78d24d11e2f783796a0e
	Oct 17 19:26:29 ha-254035 conmon[1199]: conmon f662d4e90719bc39bd00 <ninfo>: container 1202 exited with status 1
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.433901954Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f8df12f8-0980-4df8-b1a9-6ee17b7f8ffd name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.435915053Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ba31ec85-e31e-4fc3-9dcf-e12b08bd6e71 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.441058833Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f9d9837c-aba3-4e03-853d-b95f80acea4f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.441479975Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.45712493Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.457473179Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fdd046ea9be9a16a63c03510b49257ec82013029fd6bc07010444052d640f8f0/merged/etc/passwd: no such file or directory"
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.457519947Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fdd046ea9be9a16a63c03510b49257ec82013029fd6bc07010444052d640f8f0/merged/etc/group: no such file or directory"
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.457904732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.498042086Z" level=info msg="Created container faca00e9a381032f2a2a1ca361d6f8261cbb527f61722910f84bf86e69627f22: kube-system/storage-provisioner/storage-provisioner" id=f9d9837c-aba3-4e03-853d-b95f80acea4f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.499778687Z" level=info msg="Starting container: faca00e9a381032f2a2a1ca361d6f8261cbb527f61722910f84bf86e69627f22" id=14304d27-6de8-4811-9a66-8c4d47f3188f name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:26:29 ha-254035 crio[663]: time="2025-10-17T19:26:29.503194694Z" level=info msg="Started container" PID=1483 containerID=faca00e9a381032f2a2a1ca361d6f8261cbb527f61722910f84bf86e69627f22 description=kube-system/storage-provisioner/storage-provisioner id=14304d27-6de8-4811-9a66-8c4d47f3188f name=/runtime.v1.RuntimeService/StartContainer sandboxID=c2cae7d5aa8d4e785124a213f6c2cc39a98e7313513ec9ea001c05e6360e2f93
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	faca00e9a3810       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       2                   c2cae7d5aa8d4       storage-provisioner                 kube-system
	09b363cd1ecad       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   5 minutes ago       Running             kube-controller-manager   5                   ee9f2d44d3237       kube-controller-manager-ha-254035   kube-system
	576cfa798259d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 minutes ago       Running             kindnet-cni               1                   70bac1a7c5264       kindnet-gzzsg                       kube-system
	9ee89513ed12a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   9b9434e716ce6       coredns-66bc5c9577-wbgc8            kube-system
	758a5862ad867       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   5 minutes ago       Running             busybox                   1                   be0fe8edcd6ba       busybox-7b57f96db7-nc6x2            default
	c52f3d12f85be       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 minutes ago       Running             kube-proxy                1                   e47d5acf8c94c       kube-proxy-548b2                    kube-system
	f662d4e90719b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Exited              storage-provisioner       1                   c2cae7d5aa8d4       storage-provisioner                 kube-system
	8edb27c8d6015       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   269b656ae24bb       coredns-66bc5c9577-gfklr            kube-system
	8f2e18695e457       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Exited              kube-controller-manager   4                   ee9f2d44d3237       kube-controller-manager-ha-254035   kube-system
	26c8280f98ef8       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Running             kube-apiserver            2                   5952fd9040500       kube-apiserver-ha-254035            kube-system
	a9f69dd8228df       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   7 minutes ago       Running             kube-scheduler            1                   9e4e211817dbb       kube-scheduler-ha-254035            kube-system
	2dc181e1d75c1       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   7 minutes ago       Running             kube-vip                  0                   75776cf83b5c8       kube-vip-ha-254035                  kube-system
	99ffff8c4838d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   7 minutes ago       Running             etcd                      1                   d1536a316aa1d       etcd-ha-254035                      kube-system
	b745cb636fe8e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   7 minutes ago       Exited              kube-apiserver            1                   5952fd9040500       kube-apiserver-ha-254035            kube-system
	
	
	==> coredns [8edb27c8d6015a43dc1b4fd9d8f695495a303a3c83de005f1197b1c1420e5d7e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58119 - 23158 "HINFO IN 703179826096282682.4600017575089700098. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.025326139s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [9ee89513ed12a83eea9b477aadcc58ed9f5e2d62a017cd43bad27b1118f04b45] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59051 - 49005 "HINFO IN 2456025369292059622.4845573965486641381. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018045022s
	
	
	==> describe nodes <==
	Name:               ha-254035
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_17_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:17:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:31:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:31:37 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:31:37 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:31:37 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:31:37 +0000   Fri, 17 Oct 2025 19:18:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-254035
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                eadb5c5f-dcbb-485c-aea7-3aa5b951fd9e
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-nc6x2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-gfklr             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-wbgc8             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-254035                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-gzzsg                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-254035             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-254035    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-548b2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-254035             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-254035                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m49s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-254035 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-254035 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-254035 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-254035 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           8m37s                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   NodeHasSufficientMemory  7m57s (x8 over 7m58s)  kubelet          Node ha-254035 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m57s (x8 over 7m58s)  kubelet          Node ha-254035 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m57s (x8 over 7m58s)  kubelet          Node ha-254035 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m18s                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	
	
	Name:               ha-254035-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_18_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:18:42 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:23:19 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 17 Oct 2025 19:23:09 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 17 Oct 2025 19:23:09 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 17 Oct 2025 19:23:09 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 17 Oct 2025 19:23:09 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-254035-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                6c5e97e0-fa27-407d-a976-b646e8a40ca5
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6xjlp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-254035-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-vss98                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-254035-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-254035-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-b4fr6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-254035-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-254035-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m33s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   Starting                 9m15s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m15s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m14s (x8 over 9m15s)  kubelet          Node ha-254035-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     9m14s (x8 over 9m15s)  kubelet          Node ha-254035-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    9m14s (x8 over 9m15s)  kubelet          Node ha-254035-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeNotReady             8m42s                  node-controller  Node ha-254035-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           8m37s                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           5m18s                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   NodeNotReady             4m28s                  node-controller  Node ha-254035-m02 status is now: NodeNotReady
	
	
	Name:               ha-254035-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_20_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:19:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:23:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 17 Oct 2025 19:21:41 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 17 Oct 2025 19:21:41 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 17 Oct 2025 19:21:41 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 17 Oct 2025 19:21:41 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-254035-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                2f343c58-0cc9-444a-bc88-7799c3ff52df
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-979zm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-254035-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-2k9kj                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-254035-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-254035-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-k56cv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-254035-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-254035-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        11m    kube-proxy       
	  Normal  RegisteredNode  11m    node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal  RegisteredNode  8m37s  node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal  RegisteredNode  5m18s  node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal  NodeNotReady    4m28s  node-controller  Node ha-254035-m03 status is now: NodeNotReady
	
	
	Name:               ha-254035-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_21_16_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:21:15 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:22:57 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 17 Oct 2025 19:21:57 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 17 Oct 2025 19:21:57 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 17 Oct 2025 19:21:57 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 17 Oct 2025 19:21:57 +0000   Fri, 17 Oct 2025 19:27:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-254035-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                12691412-a8b5-426e-846e-d6161e527ea6
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pwhwv       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-fr5ts    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-254035-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-254035-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-254035-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   NodeReady                9m52s              kubelet          Node ha-254035-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m37s              node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           5m18s              node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   NodeNotReady             4m28s              node-controller  Node ha-254035-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Oct17 18:30] overlayfs: idmapped layers are currently not supported
	[Oct17 18:31] overlayfs: idmapped layers are currently not supported
	[  +9.357480] overlayfs: idmapped layers are currently not supported
	[Oct17 18:33] overlayfs: idmapped layers are currently not supported
	[  +5.779853] overlayfs: idmapped layers are currently not supported
	[Oct17 18:34] overlayfs: idmapped layers are currently not supported
	[Oct17 18:35] overlayfs: idmapped layers are currently not supported
	[Oct17 18:36] overlayfs: idmapped layers are currently not supported
	[ +20.850590] overlayfs: idmapped layers are currently not supported
	[Oct17 18:38] overlayfs: idmapped layers are currently not supported
	[ +19.812679] overlayfs: idmapped layers are currently not supported
	[Oct17 18:39] overlayfs: idmapped layers are currently not supported
	[ +19.225178] overlayfs: idmapped layers are currently not supported
	[Oct17 18:40] overlayfs: idmapped layers are currently not supported
	[Oct17 18:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 18:57] overlayfs: idmapped layers are currently not supported
	[Oct17 19:03] overlayfs: idmapped layers are currently not supported
	[Oct17 19:04] overlayfs: idmapped layers are currently not supported
	[Oct17 19:17] overlayfs: idmapped layers are currently not supported
	[Oct17 19:18] overlayfs: idmapped layers are currently not supported
	[Oct17 19:19] overlayfs: idmapped layers are currently not supported
	[Oct17 19:21] overlayfs: idmapped layers are currently not supported
	[Oct17 19:22] overlayfs: idmapped layers are currently not supported
	[Oct17 19:23] overlayfs: idmapped layers are currently not supported
	[  +4.119232] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [99ffff8c4838d302fd86aa2def104fc0bc5a061a4b4b00a66b6659be26e84f94] <==
	{"level":"warn","ts":"2025-10-17T19:31:48.982899Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.084219Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.217678Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.226790Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.235698Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.245086Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.260878Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.272955Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.283249Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.286187Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.296475Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.298122Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.305553Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.312999Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.316558Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.319446Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.327704Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.335625Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.345469Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.349974Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.352799Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.356746Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.364212Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.371829Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-17T19:31:49.382954Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:31:49 up  2:14,  0 user,  load average: 1.50, 1.30, 1.27
	Linux ha-254035 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [576cfa798259d8160ac05728f7d414a328778671800ac5aa4b4d45bfd6b32ca7] <==
	I1017 19:31:12.316884       1 main.go:301] handling current node
	I1017 19:31:22.316591       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:31:22.316724       1 main.go:301] handling current node
	I1017 19:31:22.316765       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:31:22.316799       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:31:22.316958       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:31:22.316999       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:31:22.317085       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 19:31:22.317118       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	I1017 19:31:32.318786       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:31:32.318883       1 main.go:301] handling current node
	I1017 19:31:32.318923       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:31:32.318956       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:31:32.319124       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:31:32.319162       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:31:32.319267       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 19:31:32.319300       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	I1017 19:31:42.312669       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:31:42.312715       1 main.go:301] handling current node
	I1017 19:31:42.312734       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:31:42.312741       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:31:42.312914       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:31:42.312921       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:31:42.312977       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 19:31:42.312984       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [26c8280f98ef8d0b35d3d3f933f908e0be045364d9887ae7338e14fc4e4385e4] <==
	I1017 19:25:41.080327       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 19:25:41.096711       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 19:25:41.096824       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 19:25:41.097844       1 cache.go:39] Caches are synced for autoregister controller
	I1017 19:25:41.175963       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 19:25:41.240687       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:25:41.270984       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	W1017 19:25:41.278063       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1017 19:25:41.280292       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 19:25:41.288893       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 19:25:41.289028       1 policy_source.go:240] refreshing policies
	I1017 19:25:41.289185       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 19:25:41.331450       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:25:41.383818       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:25:41.406733       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1017 19:25:41.413308       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1017 19:25:45.477912       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1017 19:25:45.579324       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 19:25:45.579417       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	W1017 19:25:46.424106       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1017 19:25:47.046652       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1017 19:26:06.426319       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1017 19:27:22.125956       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 19:27:22.236976       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:27:22.377213       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [b745cb636fe8e12797dbad3808d1af04aa579d4fbd2ba8ac91052e88e1d9594d] <==
	{"level":"warn","ts":"2025-10-17T19:24:55.662540Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f51a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.662541Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001002000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.662657Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f51a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.662764Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016fad20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.662902Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016fad20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.663035Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400253bc20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.663152Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400253bc20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.663213Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001002000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.663271Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40011003c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.663383Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001002000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.664911Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016fba40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665014Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016fba40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665142Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40016fba40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665183Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026141e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665234Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026141e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665283Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002615680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665351Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002b00960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.665456Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40027650e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:24:55.662006Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40014c32c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":3,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
	{"level":"warn","ts":"2025-10-17T19:25:01.465860Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001002d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	E1017 19:25:01.465976       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError"
	E1017 19:25:01.466227       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="GET" URI="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-254035?timeout=10s" auditID="46bb9fa1-62e8-45b2-afdf-459f2b875119"
	E1017 19:25:01.466249       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.626µs" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-254035" result=null
	F1017 19:25:02.365194       1 hooks.go:204] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	{"level":"warn","ts":"2025-10-17T19:25:02.527979Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f51860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	
	
	==> kube-controller-manager [09b363cd1ecad740d92d4ebc587ded23344ec9174985137bd42062048a60cec4] <==
	I1017 19:26:31.955042       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:26:31.955150       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 19:26:31.955182       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 19:26:31.960320       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 19:26:31.964011       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 19:26:31.973631       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 19:26:31.974067       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 19:26:31.974279       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 19:26:31.974994       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 19:26:31.975207       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:26:31.975822       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 19:26:31.976008       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 19:26:31.976066       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 19:26:31.976280       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 19:26:31.977778       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 19:26:31.982328       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 19:26:31.982451       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-254035-m04"
	I1017 19:26:31.985705       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:26:31.985877       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 19:26:31.996213       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 19:26:31.999311       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:26:32.005595       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 19:26:32.011326       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 19:26:32.011373       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 19:27:22.463777       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	
	
	==> kube-controller-manager [8f2e18695e457839c6b48b8cf9525b8e3133c1a6d2c7b0e484fc6130ec820a7a] <==
	I1017 19:25:26.963428       1 serving.go:386] Generated self-signed cert in-memory
	I1017 19:25:27.847264       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1017 19:25:27.847300       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:25:27.848875       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1017 19:25:27.849078       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1017 19:25:27.849285       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1017 19:25:27.849330       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 19:25:37.867683       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [c52f3d12f85be9ad9f0f95f3255def1ee473db156fc0776fb80fa92aad03d8c3] <==
	I1017 19:25:59.103590       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:25:59.177968       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:25:59.279067       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:25:59.279103       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 19:25:59.279223       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:25:59.297489       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:25:59.297617       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:25:59.301231       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:25:59.301529       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:25:59.301552       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:25:59.305385       1 config.go:200] "Starting service config controller"
	I1017 19:25:59.305486       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:25:59.305654       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:25:59.305943       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:25:59.306000       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:25:59.306196       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:25:59.307366       1 config.go:309] "Starting node config controller"
	I1017 19:25:59.311349       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:25:59.311421       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:25:59.405715       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:25:59.406183       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:25:59.406288       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a9f69dd8228df806b3caf0a6a77814b3035f6624474afd789ff17d36b93becbb] <==
	E1017 19:24:43.700780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:24:44.750268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:24:46.554973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:24:47.376765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 19:24:47.902102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:25:06.878063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:25:07.212761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:25:12.280794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:25:12.456185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:25:13.739609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:25:14.975535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:25:16.328928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:25:18.380682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:25:20.375603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:25:21.123675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:25:21.517709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:25:21.932068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:25:22.080795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:25:22.270841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:25:25.020718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:25:25.490826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:25:28.981572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:25:29.683639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 19:25:35.763654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1017 19:26:13.713049       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.312257     795 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-gfklr_kube-system(8bf2b43b-91c9-4531-a571-36060412860e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.312386     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-gfklr" podUID="8bf2b43b-91c9-4531-a571-36060412860e"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.317109     795 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-gzzsg_kube-system(9d09bb8e-ddb5-4533-9215-83fefb05a7eb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.317252     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-gzzsg" podUID="9d09bb8e-ddb5-4533-9215-83fefb05a7eb"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.319138     795 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-wbgc8_kube-system(8e82e918-326c-4295-82ea-e35a31f64287): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.319272     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-wbgc8" podUID="8e82e918-326c-4295-82ea-e35a31f64287"
	Oct 17 19:25:47 ha-254035 kubelet[795]: I1017 19:25:47.321488     795 scope.go:117] "RemoveContainer" containerID="8f2e18695e457839c6b48b8cf9525b8e3133c1a6d2c7b0e484fc6130ec820a7a"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.321734     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-254035_kube-system(9046e63156250f7e5e453bf172e4f118)\"" pod="kube-system/kube-controller-manager-ha-254035" podUID="9046e63156250f7e5e453bf172e4f118"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.322802     795 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-proxy start failed in pod kube-proxy-548b2_kube-system(4b772887-90df-4871-9343-69349bdda859): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:25:47 ha-254035 kubelet[795]: E1017 19:25:47.322858     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-548b2" podUID="4b772887-90df-4871-9343-69349bdda859"
	Oct 17 19:25:47 ha-254035 kubelet[795]: I1017 19:25:47.952228     795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f120554cc7e7eb74e29c79f31815613" path="/var/lib/kubelet/pods/4f120554cc7e7eb74e29c79f31815613/volumes"
	Oct 17 19:25:48 ha-254035 kubelet[795]: I1017 19:25:48.323043     795 scope.go:117] "RemoveContainer" containerID="8f2e18695e457839c6b48b8cf9525b8e3133c1a6d2c7b0e484fc6130ec820a7a"
	Oct 17 19:25:48 ha-254035 kubelet[795]: E1017 19:25:48.323207     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-254035_kube-system(9046e63156250f7e5e453bf172e4f118)\"" pod="kube-system/kube-controller-manager-ha-254035" podUID="9046e63156250f7e5e453bf172e4f118"
	Oct 17 19:25:51 ha-254035 kubelet[795]: E1017 19:25:51.831559     795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03470d76597f9b6c687fb760070a93426d27f3c0f7970222ccd19d14d2affb5f\": container with ID starting with 03470d76597f9b6c687fb760070a93426d27f3c0f7970222ccd19d14d2affb5f not found: ID does not exist" containerID="03470d76597f9b6c687fb760070a93426d27f3c0f7970222ccd19d14d2affb5f"
	Oct 17 19:25:51 ha-254035 kubelet[795]: I1017 19:25:51.831609     795 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="03470d76597f9b6c687fb760070a93426d27f3c0f7970222ccd19d14d2affb5f" err="rpc error: code = NotFound desc = could not find container \"03470d76597f9b6c687fb760070a93426d27f3c0f7970222ccd19d14d2affb5f\": container with ID starting with 03470d76597f9b6c687fb760070a93426d27f3c0f7970222ccd19d14d2affb5f not found: ID does not exist"
	Oct 17 19:25:51 ha-254035 kubelet[795]: E1017 19:25:51.832065     795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37f378576ff44f5cd1ccff55de48495bda098525ad6fb1d91c1ef854b4fdd99f\": container with ID starting with 37f378576ff44f5cd1ccff55de48495bda098525ad6fb1d91c1ef854b4fdd99f not found: ID does not exist" containerID="37f378576ff44f5cd1ccff55de48495bda098525ad6fb1d91c1ef854b4fdd99f"
	Oct 17 19:25:51 ha-254035 kubelet[795]: I1017 19:25:51.832099     795 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="37f378576ff44f5cd1ccff55de48495bda098525ad6fb1d91c1ef854b4fdd99f" err="rpc error: code = NotFound desc = could not find container \"37f378576ff44f5cd1ccff55de48495bda098525ad6fb1d91c1ef854b4fdd99f\": container with ID starting with 37f378576ff44f5cd1ccff55de48495bda098525ad6fb1d91c1ef854b4fdd99f not found: ID does not exist"
	Oct 17 19:25:51 ha-254035 kubelet[795]: E1017 19:25:51.918773     795 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a4e6e217ea695149c5a154bbecbc7798ca28f6ae40caa311c266f47def107466/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a4e6e217ea695149c5a154bbecbc7798ca28f6ae40caa311c266f47def107466/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-254035_9046e63156250f7e5e453bf172e4f118/kube-controller-manager/3.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-254035_9046e63156250f7e5e453bf172e4f118/kube-controller-manager/3.log: no such file or directory
	Oct 17 19:25:51 ha-254035 kubelet[795]: E1017 19:25:51.921773     795 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/880b7d2432f854b1d2e4221c38cbcfa637187b519d26b99deb22f9bb126c2b9f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/880b7d2432f854b1d2e4221c38cbcfa637187b519d26b99deb22f9bb126c2b9f/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-254035_9046e63156250f7e5e453bf172e4f118/kube-controller-manager/2.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-254035_9046e63156250f7e5e453bf172e4f118/kube-controller-manager/2.log: no such file or directory
	Oct 17 19:25:59 ha-254035 kubelet[795]: I1017 19:25:59.951449     795 scope.go:117] "RemoveContainer" containerID="8f2e18695e457839c6b48b8cf9525b8e3133c1a6d2c7b0e484fc6130ec820a7a"
	Oct 17 19:25:59 ha-254035 kubelet[795]: E1017 19:25:59.951658     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-254035_kube-system(9046e63156250f7e5e453bf172e4f118)\"" pod="kube-system/kube-controller-manager-ha-254035" podUID="9046e63156250f7e5e453bf172e4f118"
	Oct 17 19:26:14 ha-254035 kubelet[795]: I1017 19:26:14.950613     795 scope.go:117] "RemoveContainer" containerID="8f2e18695e457839c6b48b8cf9525b8e3133c1a6d2c7b0e484fc6130ec820a7a"
	Oct 17 19:26:14 ha-254035 kubelet[795]: E1017 19:26:14.950806     795 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-254035_kube-system(9046e63156250f7e5e453bf172e4f118)\"" pod="kube-system/kube-controller-manager-ha-254035" podUID="9046e63156250f7e5e453bf172e4f118"
	Oct 17 19:26:27 ha-254035 kubelet[795]: I1017 19:26:27.952669     795 scope.go:117] "RemoveContainer" containerID="8f2e18695e457839c6b48b8cf9525b8e3133c1a6d2c7b0e484fc6130ec820a7a"
	Oct 17 19:26:29 ha-254035 kubelet[795]: I1017 19:26:29.433310     795 scope.go:117] "RemoveContainer" containerID="f662d4e90719bc39bd008b62c1cbb5dd8676a08edeef61897f3e68749b418ff7"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-254035 -n ha-254035
helpers_test.go:269: (dbg) Run:  kubectl --context ha-254035 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (4.94s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (14.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 stop --alsologtostderr -v 5: (13.930807818s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5: exit status 7 (127.88293ms)

                                                
                                                
-- stdout --
	ha-254035
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-254035-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-254035-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-254035-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:32:05.502029  324914 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:32:05.502158  324914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:32:05.502194  324914 out.go:374] Setting ErrFile to fd 2...
	I1017 19:32:05.502205  324914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:32:05.502460  324914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:32:05.502648  324914 out.go:368] Setting JSON to false
	I1017 19:32:05.502690  324914 mustload.go:65] Loading cluster: ha-254035
	I1017 19:32:05.502783  324914 notify.go:220] Checking for updates...
	I1017 19:32:05.503068  324914 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:05.503077  324914 status.go:174] checking status of ha-254035 ...
	I1017 19:32:05.503595  324914 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:32:05.522750  324914 status.go:371] ha-254035 host status = "Stopped" (err=<nil>)
	I1017 19:32:05.522776  324914 status.go:384] host is not running, skipping remaining checks
	I1017 19:32:05.522783  324914 status.go:176] ha-254035 status: &{Name:ha-254035 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:32:05.522806  324914 status.go:174] checking status of ha-254035-m02 ...
	I1017 19:32:05.523103  324914 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:32:05.550587  324914 status.go:371] ha-254035-m02 host status = "Stopped" (err=<nil>)
	I1017 19:32:05.550631  324914 status.go:384] host is not running, skipping remaining checks
	I1017 19:32:05.550638  324914 status.go:176] ha-254035-m02 status: &{Name:ha-254035-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:32:05.550659  324914 status.go:174] checking status of ha-254035-m03 ...
	I1017 19:32:05.550972  324914 cli_runner.go:164] Run: docker container inspect ha-254035-m03 --format={{.State.Status}}
	I1017 19:32:05.567807  324914 status.go:371] ha-254035-m03 host status = "Stopped" (err=<nil>)
	I1017 19:32:05.567837  324914 status.go:384] host is not running, skipping remaining checks
	I1017 19:32:05.567844  324914 status.go:176] ha-254035-m03 status: &{Name:ha-254035-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:32:05.567864  324914 status.go:174] checking status of ha-254035-m04 ...
	I1017 19:32:05.568155  324914 cli_runner.go:164] Run: docker container inspect ha-254035-m04 --format={{.State.Status}}
	I1017 19:32:05.584591  324914 status.go:371] ha-254035-m04 host status = "Stopped" (err=<nil>)
	I1017 19:32:05.584615  324914 status.go:384] host is not running, skipping remaining checks
	I1017 19:32:05.584634  324914 status.go:176] ha-254035-m04 status: &{Name:ha-254035-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
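Each per-node block in the stderr trace above follows the same pattern: status.go asks Docker for the container's state and, when it is anything but running, records the host as Stopped and skips the kubelet/apiserver checks ("host is not running, skipping remaining checks"). A minimal, self-contained sketch of that kind of check, using the same docker command that appears in the log (illustrative only, not minikube's actual status.go; node names taken from this profile):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState mirrors the per-node check visible in the log above: ask Docker
// for the container state and treat anything other than "running" as a
// stopped host.
func hostState(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		container, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	for _, node := range []string{"ha-254035", "ha-254035-m02", "ha-254035-m03", "ha-254035-m04"} {
		state, err := hostState(node)
		if err != nil {
			fmt.Printf("%s: inspect failed: %v\n", node, err)
			continue
		}
		fmt.Printf("%s: host state %q\n", node, state)
		if state != "running" {
			fmt.Printf("%s: host is not running, skipping remaining checks\n", node)
		}
	}
}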
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5": ha-254035
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-254035-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-254035-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-254035-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5": ha-254035
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-254035-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-254035-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-254035-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5": ha-254035
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-254035-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-254035-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-254035-m04
type: Worker
host: Stopped
kubelet: Stopped
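The three failures above (ha_test.go:545, ha_test.go:551 and ha_test.go:554) are count checks run against the plain-text status dump each of them quotes: how many nodes report "type: Control Plane", how many report "kubelet: Stopped", and how many report "apiserver: Stopped". A rough, self-contained sketch of that style of check, not the actual ha_test.go helpers, with the status text abridged from the dump above:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Abridged from the status dump above; the real checks run against the
	// full four-node output of "minikube status".
	status := `ha-254035
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped

ha-254035-m04
type: Worker
host: Stopped
kubelet: Stopped
`
	// Each assertion reduces to counting a "key: value" line in that text.
	fmt.Println("control-plane nodes:", strings.Count(status, "type: Control Plane"))
	fmt.Println("stopped kubelets:   ", strings.Count(status, "kubelet: Stopped"))
	fmt.Println("stopped apiservers: ", strings.Count(status, "apiserver: Stopped"))
}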

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-254035
helpers_test.go:243: (dbg) docker inspect ha-254035:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8",
	        "Created": "2025-10-17T19:17:36.603472481Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 137,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:23:44.340324163Z",
	            "FinishedAt": "2025-10-17T19:32:05.172940124Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/hosts",
	        "LogPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8-json.log",
	        "Name": "/ha-254035",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-254035:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-254035",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8",
	                "LowerDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-254035",
	                "Source": "/var/lib/docker/volumes/ha-254035/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-254035",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-254035",
	                "name.minikube.sigs.k8s.io": "ha-254035",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-254035": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9f667d9c3ea201faa6573d33bffc4907012785051d424eb86a31b1e09eb8b135",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-254035",
	                        "7f770318d5dc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-254035 -n ha-254035
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-254035 -n ha-254035: exit status 7 (71.56493ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 7 (may be ok)
helpers_test.go:249: "ha-254035" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (14.15s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (112s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m47.870409722s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5
ha_test.go:568: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5: (1.085915761s)
ha_test.go:573: status says not two control-plane nodes are present: args "out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5": ha-254035
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:576: status says not three hosts are running: args "out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5": ha-254035
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:579: status says not three kubelets are running: args "out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5": ha-254035
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:582: status says not two apiservers are running: args "out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5": ha-254035
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:599: expected 3 nodes Ready status to be True, got 
-- stdout --
	' True
	 True
	 True
	 True
	'

                                                
                                                
-- /stdout --
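The check at ha_test.go:594/599 evaluates a kubectl go-template that prints the Ready condition status for every node and then expects exactly three nodes reporting True; the dump above contains four, which is what trips the failure. A small sketch of that counting step, assuming the template output has already been captured into a string exactly as printed above (including the stray quote characters visible in the dump):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Copied from the failure output above.
	out := "' True\n True\n True\n True\n'"

	ready := 0
	for _, line := range strings.Split(out, "\n") {
		if strings.TrimSpace(strings.Trim(line, "'")) == "True" {
			ready++
		}
	}
	fmt.Printf("nodes reporting Ready=True: %d (test expects 3)\n", ready)
}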
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-254035
helpers_test.go:243: (dbg) docker inspect ha-254035:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8",
	        "Created": "2025-10-17T19:17:36.603472481Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 325091,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:32:05.992149801Z",
	            "FinishedAt": "2025-10-17T19:32:05.172940124Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/hosts",
	        "LogPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8-json.log",
	        "Name": "/ha-254035",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-254035:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-254035",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8",
	                "LowerDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-254035",
	                "Source": "/var/lib/docker/volumes/ha-254035/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-254035",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-254035",
	                "name.minikube.sigs.k8s.io": "ha-254035",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b1b39170e4096374d7e684a87814d212baad95e741e4cc807dce61f43c877747",
	            "SandboxKey": "/var/run/docker/netns/b1b39170e409",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-254035": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:e2:15:6d:bc:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9f667d9c3ea201faa6573d33bffc4907012785051d424eb86a31b1e09eb8b135",
	                    "EndpointID": "e9462a0e2e3d7837432ea03485390bfaae7ae9afbbbbc20020bc0ae2782b8ba7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-254035",
	                        "7f770318d5dc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-254035 -n ha-254035
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 logs -n 25: (1.747043165s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-254035 cp ha-254035-m03:/home/docker/cp-test.txt ha-254035-m04:/home/docker/cp-test_ha-254035-m03_ha-254035-m04.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test_ha-254035-m03_ha-254035-m04.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp testdata/cp-test.txt ha-254035-m04:/home/docker/cp-test.txt                                                             │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1188979754/001/cp-test_ha-254035-m04.txt │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035:/home/docker/cp-test_ha-254035-m04_ha-254035.txt                       │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035.txt                                                 │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035-m02:/home/docker/cp-test_ha-254035-m04_ha-254035-m02.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m02 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035-m02.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035-m03:/home/docker/cp-test_ha-254035-m04_ha-254035-m03.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m03 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035-m03.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ node    │ ha-254035 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ node    │ ha-254035 node start m02 --alsologtostderr -v 5                                                                                      │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:23 UTC │
	│ node    │ ha-254035 node list --alsologtostderr -v 5                                                                                           │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │                     │
	│ stop    │ ha-254035 stop --alsologtostderr -v 5                                                                                                │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │ 17 Oct 25 19:23 UTC │
	│ start   │ ha-254035 start --wait true --alsologtostderr -v 5                                                                                   │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │                     │
	│ node    │ ha-254035 node list --alsologtostderr -v 5                                                                                           │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:31 UTC │                     │
	│ node    │ ha-254035 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:31 UTC │                     │
	│ stop    │ ha-254035 stop --alsologtostderr -v 5                                                                                                │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:31 UTC │ 17 Oct 25 19:32 UTC │
	│ start   │ ha-254035 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:32 UTC │ 17 Oct 25 19:33 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:32:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:32:05.731928  324968 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:32:05.732103  324968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:32:05.732132  324968 out.go:374] Setting ErrFile to fd 2...
	I1017 19:32:05.732151  324968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:32:05.732432  324968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:32:05.732853  324968 out.go:368] Setting JSON to false
	I1017 19:32:05.733704  324968 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8077,"bootTime":1760721449,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 19:32:05.733797  324968 start.go:141] virtualization:  
	I1017 19:32:05.736996  324968 out.go:179] * [ha-254035] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 19:32:05.740976  324968 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:32:05.741039  324968 notify.go:220] Checking for updates...
	I1017 19:32:05.746791  324968 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:32:05.749627  324968 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:32:05.752435  324968 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 19:32:05.755486  324968 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 19:32:05.758645  324968 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:32:05.762073  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:05.762786  324968 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:32:05.783133  324968 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 19:32:05.783261  324968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:32:05.840860  324968 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 19:32:05.83134404 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:32:05.840970  324968 docker.go:318] overlay module found
	I1017 19:32:05.844001  324968 out.go:179] * Using the docker driver based on existing profile
	I1017 19:32:05.846818  324968 start.go:305] selected driver: docker
	I1017 19:32:05.846835  324968 start.go:925] validating driver "docker" against &{Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:32:05.846996  324968 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:32:05.847094  324968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:32:05.907256  324968 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 19:32:05.898245791 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:32:05.907667  324968 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:32:05.907704  324968 cni.go:84] Creating CNI manager for ""
	I1017 19:32:05.907768  324968 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 19:32:05.907825  324968 start.go:349] cluster config:
	{Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:32:05.911004  324968 out.go:179] * Starting "ha-254035" primary control-plane node in "ha-254035" cluster
	I1017 19:32:05.913729  324968 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:32:05.916410  324968 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:32:05.919155  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:32:05.919202  324968 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 19:32:05.919216  324968 cache.go:58] Caching tarball of preloaded images
	I1017 19:32:05.919268  324968 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:32:05.919311  324968 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:32:05.919321  324968 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:32:05.919466  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:05.938132  324968 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:32:05.938154  324968 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:32:05.938173  324968 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:32:05.938195  324968 start.go:360] acquireMachinesLock for ha-254035: {Name:mka2e39989b9cf6078778e7f6519885462ea711f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:32:05.938260  324968 start.go:364] duration metric: took 36.741µs to acquireMachinesLock for "ha-254035"
	I1017 19:32:05.938292  324968 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:32:05.938311  324968 fix.go:54] fixHost starting: 
	I1017 19:32:05.938563  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:32:05.955500  324968 fix.go:112] recreateIfNeeded on ha-254035: state=Stopped err=<nil>
	W1017 19:32:05.955532  324968 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:32:05.958901  324968 out.go:252] * Restarting existing docker container for "ha-254035" ...
	I1017 19:32:05.958986  324968 cli_runner.go:164] Run: docker start ha-254035
	I1017 19:32:06.223945  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:32:06.246991  324968 kic.go:430] container "ha-254035" state is running.
	I1017 19:32:06.247441  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:32:06.267236  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:06.267478  324968 machine.go:93] provisionDockerMachine start ...
	I1017 19:32:06.267538  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:06.286531  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:06.287650  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 19:32:06.287670  324968 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:32:06.288401  324968 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:32:09.440064  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035
	
	I1017 19:32:09.440099  324968 ubuntu.go:182] provisioning hostname "ha-254035"
	I1017 19:32:09.440162  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:09.457351  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:09.457659  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 19:32:09.457674  324968 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035 && echo "ha-254035" | sudo tee /etc/hostname
	I1017 19:32:09.613626  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035
	
	I1017 19:32:09.613711  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:09.630718  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:09.631029  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 19:32:09.631045  324968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:32:09.780773  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:32:09.780802  324968 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:32:09.780820  324968 ubuntu.go:190] setting up certificates
	I1017 19:32:09.780831  324968 provision.go:84] configureAuth start
	I1017 19:32:09.780894  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:32:09.801074  324968 provision.go:143] copyHostCerts
	I1017 19:32:09.801116  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:09.801147  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:32:09.801165  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:09.801244  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:32:09.801333  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:09.801350  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:32:09.801354  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:09.801381  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:32:09.801427  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:09.801450  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:32:09.801455  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:09.801479  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:32:09.801528  324968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035 san=[127.0.0.1 192.168.49.2 ha-254035 localhost minikube]
	I1017 19:32:10.886077  324968 provision.go:177] copyRemoteCerts
	I1017 19:32:10.886156  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:32:10.886202  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:10.904681  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.010120  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:32:11.010211  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:32:11.028108  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:32:11.028165  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1017 19:32:11.044982  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:32:11.045040  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:32:11.061816  324968 provision.go:87] duration metric: took 1.280961553s to configureAuth
	I1017 19:32:11.061844  324968 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:32:11.062085  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:11.062193  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.080891  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:11.081208  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 19:32:11.081230  324968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:32:11.407184  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:32:11.407205  324968 machine.go:96] duration metric: took 5.139717317s to provisionDockerMachine
	I1017 19:32:11.407216  324968 start.go:293] postStartSetup for "ha-254035" (driver="docker")
	I1017 19:32:11.407226  324968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:32:11.407298  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:32:11.407335  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.427760  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.532299  324968 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:32:11.535879  324968 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:32:11.535910  324968 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:32:11.535921  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:32:11.535995  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:32:11.536114  324968 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:32:11.536128  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:32:11.536253  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:32:11.544245  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:32:11.561441  324968 start.go:296] duration metric: took 154.210245ms for postStartSetup
	I1017 19:32:11.561521  324968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:32:11.561565  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.578819  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.677440  324968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:32:11.681988  324968 fix.go:56] duration metric: took 5.74367054s for fixHost
	I1017 19:32:11.682016  324968 start.go:83] releasing machines lock for "ha-254035", held for 5.743742202s
	I1017 19:32:11.682098  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:32:11.699528  324968 ssh_runner.go:195] Run: cat /version.json
	I1017 19:32:11.699564  324968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:32:11.699581  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.699635  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.717585  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.718770  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.820235  324968 ssh_runner.go:195] Run: systemctl --version
	I1017 19:32:11.912550  324968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:32:11.950130  324968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:32:11.954364  324968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:32:11.954441  324968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:32:11.961885  324968 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:32:11.961962  324968 start.go:495] detecting cgroup driver to use...
	I1017 19:32:11.962000  324968 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:32:11.962067  324968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:32:11.977362  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:32:11.990093  324968 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:32:11.990161  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:32:12.005596  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:32:12.028034  324968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:32:12.152900  324968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:32:12.266767  324968 docker.go:234] disabling docker service ...
	I1017 19:32:12.266872  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:32:12.281703  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:32:12.294628  324968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:32:12.407632  324968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:32:12.520465  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:32:12.533571  324968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:32:12.547072  324968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:32:12.547164  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.555749  324968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:32:12.555816  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.564895  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.574036  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.582944  324968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:32:12.591372  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.600416  324968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.609166  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.618096  324968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:32:12.625617  324968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:32:12.633309  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:12.745158  324968 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:32:12.879102  324968 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:32:12.879171  324968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:32:12.883018  324968 start.go:563] Will wait 60s for crictl version
	I1017 19:32:12.883079  324968 ssh_runner.go:195] Run: which crictl
	I1017 19:32:12.886642  324968 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:32:12.910860  324968 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:32:12.910959  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:32:12.937450  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:32:12.969308  324968 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:32:12.971996  324968 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:32:12.987690  324968 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:32:12.991595  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:32:13.001105  324968 kubeadm.go:883] updating cluster {Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:32:13.001261  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:32:13.001318  324968 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:32:13.038776  324968 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:32:13.038803  324968 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:32:13.038896  324968 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:32:13.068706  324968 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:32:13.068731  324968 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:32:13.068740  324968 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 19:32:13.068844  324968 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:32:13.068920  324968 ssh_runner.go:195] Run: crio config
	I1017 19:32:13.128454  324968 cni.go:84] Creating CNI manager for ""
	I1017 19:32:13.128483  324968 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 19:32:13.128514  324968 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:32:13.128575  324968 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-254035 NodeName:ha-254035 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:32:13.128708  324968 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-254035"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 19:32:13.128729  324968 kube-vip.go:115] generating kube-vip config ...
	I1017 19:32:13.128779  324968 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:32:13.140710  324968 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:32:13.140824  324968 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1017 19:32:13.140891  324968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:32:13.148269  324968 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:32:13.148357  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1017 19:32:13.156108  324968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1017 19:32:13.168572  324968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:32:13.181432  324968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1017 19:32:13.193977  324968 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 19:32:13.207012  324968 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:32:13.210795  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:32:13.220459  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:13.334243  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:32:13.350459  324968 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.2
	I1017 19:32:13.350480  324968 certs.go:195] generating shared ca certs ...
	I1017 19:32:13.350496  324968 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:13.350630  324968 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:32:13.350673  324968 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:32:13.350681  324968 certs.go:257] generating profile certs ...
	I1017 19:32:13.350760  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:32:13.350837  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea
	I1017 19:32:13.350876  324968 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:32:13.350885  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:32:13.350898  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:32:13.350908  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:32:13.350918  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:32:13.350928  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:32:13.350941  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:32:13.350951  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:32:13.350962  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:32:13.351012  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:32:13.351041  324968 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:32:13.351048  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:32:13.351070  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:32:13.351095  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:32:13.351117  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:32:13.351161  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:32:13.351191  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:32:13.351207  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:13.351219  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:32:13.351856  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:32:13.375776  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:32:13.394623  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:32:13.413878  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:32:13.434296  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:32:13.456687  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:32:13.484245  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:32:13.505393  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:32:13.528512  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:32:13.550651  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:32:13.581215  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:32:13.601377  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:32:13.617352  324968 ssh_runner.go:195] Run: openssl version
	I1017 19:32:13.624146  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:32:13.633165  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:32:13.637212  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:32:13.637279  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:32:13.680086  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:32:13.689010  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:32:13.698044  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:13.701888  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:13.701957  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:13.744236  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:32:13.752213  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:32:13.760295  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:32:13.764256  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:32:13.764320  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:32:13.806422  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:32:13.814023  324968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:32:13.817664  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:32:13.858251  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:32:13.899329  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:32:13.940348  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:32:13.981700  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:32:14.022967  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 19:32:14.071872  324968 kubeadm.go:400] StartCluster: {Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:32:14.072073  324968 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:32:14.072171  324968 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:32:14.159623  324968 cri.go:89] found id: "0652fd27f5bff0f3d194b39abbb92602f049204bb45577d9a403537b5949c8cc"
	I1017 19:32:14.159695  324968 cri.go:89] found id: ""
	I1017 19:32:14.159788  324968 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 19:32:14.178262  324968 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:32:14Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:32:14.178424  324968 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:32:14.193618  324968 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 19:32:14.193677  324968 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 19:32:14.193771  324968 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 19:32:14.214880  324968 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:32:14.215386  324968 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-254035" does not appear in /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:32:14.215555  324968 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-257739/kubeconfig needs updating (will repair): [kubeconfig missing "ha-254035" cluster setting kubeconfig missing "ha-254035" context setting]
	I1017 19:32:14.215920  324968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:14.216577  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 19:32:14.217294  324968 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1017 19:32:14.217346  324968 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1017 19:32:14.217362  324968 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1017 19:32:14.217367  324968 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1017 19:32:14.217427  324968 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1017 19:32:14.217452  324968 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1017 19:32:14.217940  324968 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 19:32:14.232358  324968 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1017 19:32:14.232432  324968 kubeadm.go:601] duration metric: took 38.716713ms to restartPrimaryControlPlane
	I1017 19:32:14.232455  324968 kubeadm.go:402] duration metric: took 160.594092ms to StartCluster
	I1017 19:32:14.232498  324968 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:14.232662  324968 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:32:14.233403  324968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:14.233677  324968 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:32:14.233733  324968 start.go:241] waiting for startup goroutines ...
	I1017 19:32:14.233763  324968 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:32:14.234454  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:14.239733  324968 out.go:179] * Enabled addons: 
	I1017 19:32:14.243909  324968 addons.go:514] duration metric: took 10.136788ms for enable addons: enabled=[]
	I1017 19:32:14.243996  324968 start.go:246] waiting for cluster config update ...
	I1017 19:32:14.244021  324968 start.go:255] writing updated cluster config ...
	I1017 19:32:14.247787  324968 out.go:203] 
	I1017 19:32:14.251318  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:14.251508  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:14.254862  324968 out.go:179] * Starting "ha-254035-m02" control-plane node in "ha-254035" cluster
	I1017 19:32:14.258139  324968 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:32:14.261425  324968 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:32:14.264451  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:32:14.264576  324968 cache.go:58] Caching tarball of preloaded images
	I1017 19:32:14.264510  324968 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:32:14.264972  324968 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:32:14.265018  324968 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:32:14.265234  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:14.286925  324968 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:32:14.286943  324968 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:32:14.286955  324968 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:32:14.286977  324968 start.go:360] acquireMachinesLock for ha-254035-m02: {Name:mkcf59557cfb2c18712510006a9b88f53e9d8916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:32:14.287029  324968 start.go:364] duration metric: took 36.003µs to acquireMachinesLock for "ha-254035-m02"
	I1017 19:32:14.287048  324968 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:32:14.287054  324968 fix.go:54] fixHost starting: m02
	I1017 19:32:14.287335  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:32:14.308380  324968 fix.go:112] recreateIfNeeded on ha-254035-m02: state=Stopped err=<nil>
	W1017 19:32:14.308406  324968 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:32:14.312007  324968 out.go:252] * Restarting existing docker container for "ha-254035-m02" ...
	I1017 19:32:14.312096  324968 cli_runner.go:164] Run: docker start ha-254035-m02
	I1017 19:32:14.710881  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:32:14.738971  324968 kic.go:430] container "ha-254035-m02" state is running.
	I1017 19:32:14.739337  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:32:14.764764  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:14.765007  324968 machine.go:93] provisionDockerMachine start ...
	I1017 19:32:14.765074  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:14.794957  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:14.795271  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1017 19:32:14.795287  324968 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:32:14.795888  324968 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:32:17.992435  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m02
	
	I1017 19:32:17.992457  324968 ubuntu.go:182] provisioning hostname "ha-254035-m02"
	I1017 19:32:17.992541  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:18.030394  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:18.030717  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1017 19:32:18.030730  324968 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035-m02 && echo "ha-254035-m02" | sudo tee /etc/hostname
	I1017 19:32:18.238178  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m02
	
	I1017 19:32:18.238358  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:18.269009  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:18.269312  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1017 19:32:18.269330  324968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:32:18.453189  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:32:18.453217  324968 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:32:18.453238  324968 ubuntu.go:190] setting up certificates
	I1017 19:32:18.453248  324968 provision.go:84] configureAuth start
	I1017 19:32:18.453312  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:32:18.494134  324968 provision.go:143] copyHostCerts
	I1017 19:32:18.494179  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:18.494213  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:32:18.494225  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:18.494315  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:32:18.494442  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:18.494469  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:32:18.494479  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:18.494510  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:32:18.494560  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:18.494584  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:32:18.494592  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:18.494620  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:32:18.494675  324968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035-m02 san=[127.0.0.1 192.168.49.3 ha-254035-m02 localhost minikube]
	I1017 19:32:19.339690  324968 provision.go:177] copyRemoteCerts
	I1017 19:32:19.339761  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:32:19.339805  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:19.360710  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:19.488967  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:32:19.489032  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:32:19.531594  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:32:19.531655  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:32:19.572626  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:32:19.572693  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:32:19.617410  324968 provision.go:87] duration metric: took 1.16414737s to configureAuth
	I1017 19:32:19.617479  324968 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:32:19.617739  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:19.617872  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:19.658286  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:19.658598  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1017 19:32:19.658613  324968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:32:20.717397  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:32:20.717469  324968 machine.go:96] duration metric: took 5.952443469s to provisionDockerMachine
	I1017 19:32:20.717493  324968 start.go:293] postStartSetup for "ha-254035-m02" (driver="docker")
	I1017 19:32:20.717527  324968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:32:20.717636  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:32:20.717717  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:20.738048  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:20.853074  324968 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:32:20.857246  324968 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:32:20.857278  324968 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:32:20.857289  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:32:20.857346  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:32:20.857423  324968 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:32:20.857437  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:32:20.857537  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:32:20.866006  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:32:20.886225  324968 start.go:296] duration metric: took 168.70092ms for postStartSetup
	I1017 19:32:20.886334  324968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:32:20.886398  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:20.912756  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:21.034286  324968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:32:21.042383  324968 fix.go:56] duration metric: took 6.755322442s for fixHost
	I1017 19:32:21.042417  324968 start.go:83] releasing machines lock for "ha-254035-m02", held for 6.755380378s
	I1017 19:32:21.042509  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:32:21.067009  324968 out.go:179] * Found network options:
	I1017 19:32:21.069796  324968 out.go:179]   - NO_PROXY=192.168.49.2
	W1017 19:32:21.072617  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:32:21.072667  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 19:32:21.072737  324968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:32:21.072783  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:21.072798  324968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:32:21.072853  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:21.106980  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:21.116734  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:21.321123  324968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:32:21.398151  324968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:32:21.398260  324968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:32:21.429985  324968 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:32:21.430019  324968 start.go:495] detecting cgroup driver to use...
	I1017 19:32:21.430052  324968 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:32:21.430120  324968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:32:21.469545  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:32:21.499838  324968 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:32:21.499915  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:32:21.546298  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:32:21.574508  324968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:32:22.043397  324968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:32:22.346332  324968 docker.go:234] disabling docker service ...
	I1017 19:32:22.346414  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:32:22.366415  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:32:22.385363  324968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:32:22.610088  324968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:32:22.882540  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:32:22.898584  324968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:32:22.925839  324968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:32:22.925982  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.941214  324968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:32:22.941380  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.952790  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.964392  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.976274  324968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:32:22.986631  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.999122  324968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:23.017402  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:23.031048  324968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:32:23.041313  324968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:32:23.054658  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:23.287821  324968 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:32:23.539139  324968 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:32:23.539262  324968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:32:23.543731  324968 start.go:563] Will wait 60s for crictl version
	I1017 19:32:23.543842  324968 ssh_runner.go:195] Run: which crictl
	I1017 19:32:23.550732  324968 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:32:23.592317  324968 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:32:23.592405  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:32:23.642337  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:32:23.710060  324968 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:32:23.713120  324968 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 19:32:23.716299  324968 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:32:23.744818  324968 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:32:23.750008  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:32:23.771597  324968 mustload.go:65] Loading cluster: ha-254035
	I1017 19:32:23.771839  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:23.772139  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:32:23.805838  324968 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:32:23.806449  324968 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.3
	I1017 19:32:23.806468  324968 certs.go:195] generating shared ca certs ...
	I1017 19:32:23.806508  324968 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:23.809795  324968 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:32:23.809866  324968 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:32:23.809883  324968 certs.go:257] generating profile certs ...
	I1017 19:32:23.809976  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:32:23.810032  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.5a836dc6
	I1017 19:32:23.810076  324968 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:32:23.810089  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:32:23.810105  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:32:23.810121  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:32:23.810138  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:32:23.810155  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:32:23.810173  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:32:23.810185  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:32:23.810197  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:32:23.810249  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:32:23.810281  324968 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:32:23.810294  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:32:23.810326  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:32:23.810354  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:32:23.810380  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:32:23.810425  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:32:23.810467  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:23.810484  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:32:23.810495  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:32:23.810560  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:23.830858  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:23.928800  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 19:32:23.933176  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 19:32:23.948803  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 19:32:23.953564  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 19:32:23.963833  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 19:32:23.970797  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 19:32:23.980707  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 19:32:23.985094  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1017 19:32:23.994719  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 19:32:23.998983  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 19:32:24.010610  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 19:32:24.015549  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 19:32:24.026675  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:32:24.046169  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:32:24.065010  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:32:24.083555  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:32:24.101835  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:32:24.121645  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:32:24.140364  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:32:24.158250  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:32:24.175078  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:32:24.192107  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:32:24.210093  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:32:24.227779  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 19:32:24.240287  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 19:32:24.253704  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 19:32:24.268887  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1017 19:32:24.281554  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 19:32:24.294030  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 19:32:24.307056  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1017 19:32:24.319713  324968 ssh_runner.go:195] Run: openssl version
	I1017 19:32:24.326454  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:32:24.334896  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:32:24.338984  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:32:24.339069  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:32:24.382244  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:32:24.389973  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:32:24.397963  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:24.402178  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:24.402260  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:24.445450  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:32:24.454057  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:32:24.462416  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:32:24.469188  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:32:24.469265  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:32:24.513771  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:32:24.526391  324968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:32:24.532093  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:32:24.577438  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:32:24.619730  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:32:24.661938  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:32:24.706695  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:32:24.750711  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 19:32:24.792693  324968 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1017 19:32:24.792815  324968 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:32:24.792847  324968 kube-vip.go:115] generating kube-vip config ...
	I1017 19:32:24.792907  324968 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:32:24.805902  324968 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:32:24.805963  324968 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1017 19:32:24.806034  324968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:32:24.815558  324968 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:32:24.815637  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 19:32:24.823591  324968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:32:24.837169  324968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:32:24.849790  324968 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 19:32:24.870243  324968 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:32:24.879498  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:32:24.891396  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:25.079299  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:32:25.098478  324968 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:32:25.098820  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:25.104996  324968 out.go:179] * Verifying Kubernetes components...
	I1017 19:32:25.107746  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:25.272984  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:32:25.289585  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 19:32:25.289670  324968 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 19:32:25.289939  324968 node_ready.go:35] waiting up to 6m0s for node "ha-254035-m02" to be "Ready" ...
	W1017 19:32:45.698726  324968 node_ready.go:57] node "ha-254035-m02" has "Ready":"Unknown" status (will retry)
	W1017 19:32:47.846677  324968 node_ready.go:57] node "ha-254035-m02" has "Ready":"Unknown" status (will retry)
	W1017 19:32:50.300191  324968 node_ready.go:57] node "ha-254035-m02" has "Ready":"Unknown" status (will retry)
	W1017 19:32:52.794234  324968 node_ready.go:57] node "ha-254035-m02" has "Ready":"Unknown" status (will retry)
	I1017 19:32:55.298996  324968 node_ready.go:49] node "ha-254035-m02" is "Ready"
	I1017 19:32:55.299027  324968 node_ready.go:38] duration metric: took 30.009056285s for node "ha-254035-m02" to be "Ready" ...
	I1017 19:32:55.299042  324968 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:32:55.299101  324968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:32:55.311396  324968 api_server.go:72] duration metric: took 30.212852853s to wait for apiserver process to appear ...
	I1017 19:32:55.311421  324968 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:32:55.311440  324968 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 19:32:55.321736  324968 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 19:32:55.323225  324968 api_server.go:141] control plane version: v1.34.1
	I1017 19:32:55.323289  324968 api_server.go:131] duration metric: took 11.860591ms to wait for apiserver health ...
	I1017 19:32:55.323326  324968 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:32:55.332734  324968 system_pods.go:59] 26 kube-system pods found
	I1017 19:32:55.332788  324968 system_pods.go:61] "coredns-66bc5c9577-gfklr" [8bf2b43b-91c9-4531-a571-36060412860e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:32:55.332797  324968 system_pods.go:61] "coredns-66bc5c9577-wbgc8" [8e82e918-326c-4295-82ea-e35a31f64287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:32:55.332809  324968 system_pods.go:61] "etcd-ha-254035" [b4680f45-2e5c-49cd-8f12-76cd58e8a039] Running
	I1017 19:32:55.332819  324968 system_pods.go:61] "etcd-ha-254035-m02" [fd83b82f-417f-4a8d-b6f2-82d1a3ea4233] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:32:55.332827  324968 system_pods.go:61] "etcd-ha-254035-m03" [98b26c2c-cb88-4ade-80f5-45b9d2b82e8f] Running
	I1017 19:32:55.332832  324968 system_pods.go:61] "kindnet-2k9kj" [79d0c5f8-da5a-4d9e-b627-6746685bb4ec] Running
	I1017 19:32:55.332845  324968 system_pods.go:61] "kindnet-gzzsg" [9d09bb8e-ddb5-4533-9215-83fefb05a7eb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:32:55.332850  324968 system_pods.go:61] "kindnet-pwhwv" [45fe6d6c-f02a-45fd-807f-68edc98a1964] Running
	I1017 19:32:55.332863  324968 system_pods.go:61] "kindnet-vss98" [a6f8b1bf-7a57-4b08-ba72-5c79fe8d1cbe] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:32:55.332872  324968 system_pods.go:61] "kube-apiserver-ha-254035" [d7b4adda-06ab-4426-9829-87c607195341] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:32:55.332881  324968 system_pods.go:61] "kube-apiserver-ha-254035-m02" [9099db15-8600-470e-94c3-ca2a5eeea1ff] Running
	I1017 19:32:55.332886  324968 system_pods.go:61] "kube-apiserver-ha-254035-m03" [eb9a2a88-a691-4422-bb82-e0c198d601eb] Running
	I1017 19:32:55.332893  324968 system_pods.go:61] "kube-controller-manager-ha-254035" [9c5287e1-d9d8-4020-b6ec-b1059fff6764] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:32:55.332905  324968 system_pods.go:61] "kube-controller-manager-ha-254035-m02" [54702c01-b38e-4b5e-b7ea-e5af903630c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:32:55.332913  324968 system_pods.go:61] "kube-controller-manager-ha-254035-m03" [2bfb9df5-b257-45ec-be05-e930f56e3c7c] Running
	I1017 19:32:55.332921  324968 system_pods.go:61] "kube-proxy-548b2" [4b772887-90df-4871-9343-69349bdda859] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:32:55.332931  324968 system_pods.go:61] "kube-proxy-b4fr6" [a7ace6b8-0068-4c44-b8d9-8d66b10fa286] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:32:55.332936  324968 system_pods.go:61] "kube-proxy-fr5ts" [5c43f8a5-c3e0-4893-9ab0-c99f69a43434] Running
	I1017 19:32:55.332941  324968 system_pods.go:61] "kube-proxy-k56cv" [32bc352e-19aa-4bcf-8c5f-bb6ffa1b2f4d] Running
	I1017 19:32:55.332953  324968 system_pods.go:61] "kube-scheduler-ha-254035" [2f888dff-efbc-410b-9e14-93754573f2f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:32:55.332964  324968 system_pods.go:61] "kube-scheduler-ha-254035-m02" [dcaa8956-7720-467c-86d5-c0296adc07dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:32:55.332973  324968 system_pods.go:61] "kube-scheduler-ha-254035-m03" [00e19215-9094-448d-b734-227230b1c474] Running
	I1017 19:32:55.332981  324968 system_pods.go:61] "kube-vip-ha-254035" [777cc428-db79-4dee-abea-a428f4fabb67] Running
	I1017 19:32:55.332985  324968 system_pods.go:61] "kube-vip-ha-254035-m02" [3a49ae9c-fc6c-4ed7-9162-7ebc56124917] Running
	I1017 19:32:55.332989  324968 system_pods.go:61] "kube-vip-ha-254035-m03" [fa0f29b9-585d-4e28-9e32-7d493f0010dd] Running
	I1017 19:32:55.333000  324968 system_pods.go:61] "storage-provisioner" [4784cc20-6df7-4e32-bbfa-e0b3be4a1e83] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:32:55.333009  324968 system_pods.go:74] duration metric: took 9.659246ms to wait for pod list to return data ...
	I1017 19:32:55.333022  324968 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:32:55.344111  324968 default_sa.go:45] found service account: "default"
	I1017 19:32:55.344138  324968 default_sa.go:55] duration metric: took 11.10916ms for default service account to be created ...
	I1017 19:32:55.344149  324968 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:32:55.351885  324968 system_pods.go:86] 26 kube-system pods found
	I1017 19:32:55.351922  324968 system_pods.go:89] "coredns-66bc5c9577-gfklr" [8bf2b43b-91c9-4531-a571-36060412860e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:32:55.351933  324968 system_pods.go:89] "coredns-66bc5c9577-wbgc8" [8e82e918-326c-4295-82ea-e35a31f64287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:32:55.351940  324968 system_pods.go:89] "etcd-ha-254035" [b4680f45-2e5c-49cd-8f12-76cd58e8a039] Running
	I1017 19:32:55.351947  324968 system_pods.go:89] "etcd-ha-254035-m02" [fd83b82f-417f-4a8d-b6f2-82d1a3ea4233] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:32:55.351952  324968 system_pods.go:89] "etcd-ha-254035-m03" [98b26c2c-cb88-4ade-80f5-45b9d2b82e8f] Running
	I1017 19:32:55.351957  324968 system_pods.go:89] "kindnet-2k9kj" [79d0c5f8-da5a-4d9e-b627-6746685bb4ec] Running
	I1017 19:32:55.351966  324968 system_pods.go:89] "kindnet-gzzsg" [9d09bb8e-ddb5-4533-9215-83fefb05a7eb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:32:55.351971  324968 system_pods.go:89] "kindnet-pwhwv" [45fe6d6c-f02a-45fd-807f-68edc98a1964] Running
	I1017 19:32:55.351986  324968 system_pods.go:89] "kindnet-vss98" [a6f8b1bf-7a57-4b08-ba72-5c79fe8d1cbe] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:32:55.351997  324968 system_pods.go:89] "kube-apiserver-ha-254035" [d7b4adda-06ab-4426-9829-87c607195341] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:32:55.352003  324968 system_pods.go:89] "kube-apiserver-ha-254035-m02" [9099db15-8600-470e-94c3-ca2a5eeea1ff] Running
	I1017 19:32:55.352010  324968 system_pods.go:89] "kube-apiserver-ha-254035-m03" [eb9a2a88-a691-4422-bb82-e0c198d601eb] Running
	I1017 19:32:55.352019  324968 system_pods.go:89] "kube-controller-manager-ha-254035" [9c5287e1-d9d8-4020-b6ec-b1059fff6764] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:32:55.352031  324968 system_pods.go:89] "kube-controller-manager-ha-254035-m02" [54702c01-b38e-4b5e-b7ea-e5af903630c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:32:55.352036  324968 system_pods.go:89] "kube-controller-manager-ha-254035-m03" [2bfb9df5-b257-45ec-be05-e930f56e3c7c] Running
	I1017 19:32:55.352043  324968 system_pods.go:89] "kube-proxy-548b2" [4b772887-90df-4871-9343-69349bdda859] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:32:55.352051  324968 system_pods.go:89] "kube-proxy-b4fr6" [a7ace6b8-0068-4c44-b8d9-8d66b10fa286] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:32:55.352056  324968 system_pods.go:89] "kube-proxy-fr5ts" [5c43f8a5-c3e0-4893-9ab0-c99f69a43434] Running
	I1017 19:32:55.352062  324968 system_pods.go:89] "kube-proxy-k56cv" [32bc352e-19aa-4bcf-8c5f-bb6ffa1b2f4d] Running
	I1017 19:32:55.352068  324968 system_pods.go:89] "kube-scheduler-ha-254035" [2f888dff-efbc-410b-9e14-93754573f2f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:32:55.352086  324968 system_pods.go:89] "kube-scheduler-ha-254035-m02" [dcaa8956-7720-467c-86d5-c0296adc07dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:32:55.352091  324968 system_pods.go:89] "kube-scheduler-ha-254035-m03" [00e19215-9094-448d-b734-227230b1c474] Running
	I1017 19:32:55.352096  324968 system_pods.go:89] "kube-vip-ha-254035" [777cc428-db79-4dee-abea-a428f4fabb67] Running
	I1017 19:32:55.352100  324968 system_pods.go:89] "kube-vip-ha-254035-m02" [3a49ae9c-fc6c-4ed7-9162-7ebc56124917] Running
	I1017 19:32:55.352108  324968 system_pods.go:89] "kube-vip-ha-254035-m03" [fa0f29b9-585d-4e28-9e32-7d493f0010dd] Running
	I1017 19:32:55.352116  324968 system_pods.go:89] "storage-provisioner" [4784cc20-6df7-4e32-bbfa-e0b3be4a1e83] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:32:55.352123  324968 system_pods.go:126] duration metric: took 7.969634ms to wait for k8s-apps to be running ...
	I1017 19:32:55.352135  324968 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:32:55.352192  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:32:55.367145  324968 system_svc.go:56] duration metric: took 14.999806ms WaitForService to wait for kubelet
	I1017 19:32:55.367171  324968 kubeadm.go:586] duration metric: took 30.268632021s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:32:55.367192  324968 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:32:55.370727  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:32:55.370762  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:32:55.370773  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:32:55.370778  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:32:55.370782  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:32:55.370786  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:32:55.370790  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:32:55.370793  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:32:55.370798  324968 node_conditions.go:105] duration metric: took 3.600536ms to run NodePressure ...
	I1017 19:32:55.370811  324968 start.go:241] waiting for startup goroutines ...
	I1017 19:32:55.370845  324968 start.go:255] writing updated cluster config ...
	I1017 19:32:55.374424  324968 out.go:203] 
	I1017 19:32:55.377636  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:55.377758  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:55.381262  324968 out.go:179] * Starting "ha-254035-m03" control-plane node in "ha-254035" cluster
	I1017 19:32:55.385137  324968 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:32:55.388169  324968 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:32:55.391014  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:32:55.391065  324968 cache.go:58] Caching tarball of preloaded images
	I1017 19:32:55.391130  324968 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:32:55.391213  324968 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:32:55.391250  324968 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:32:55.391408  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:55.410277  324968 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:32:55.410300  324968 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:32:55.410323  324968 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:32:55.410347  324968 start.go:360] acquireMachinesLock for ha-254035-m03: {Name:mked9f1e3aab9db3df3b59f9799fd7eb1b9dc756 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:32:55.410421  324968 start.go:364] duration metric: took 54.473µs to acquireMachinesLock for "ha-254035-m03"
	I1017 19:32:55.410445  324968 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:32:55.410454  324968 fix.go:54] fixHost starting: m03
	I1017 19:32:55.410732  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m03 --format={{.State.Status}}
	I1017 19:32:55.427703  324968 fix.go:112] recreateIfNeeded on ha-254035-m03: state=Stopped err=<nil>
	W1017 19:32:55.427730  324968 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:32:55.431363  324968 out.go:252] * Restarting existing docker container for "ha-254035-m03" ...
	I1017 19:32:55.431457  324968 cli_runner.go:164] Run: docker start ha-254035-m03
	I1017 19:32:55.755807  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m03 --format={{.State.Status}}
	I1017 19:32:55.777127  324968 kic.go:430] container "ha-254035-m03" state is running.
	I1017 19:32:55.777489  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m03
	I1017 19:32:55.800244  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:55.800494  324968 machine.go:93] provisionDockerMachine start ...
	I1017 19:32:55.800582  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:32:55.829783  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:55.830097  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 19:32:55.830107  324968 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:32:55.830700  324968 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:32:59.026446  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m03
	
	I1017 19:32:59.026469  324968 ubuntu.go:182] provisioning hostname "ha-254035-m03"
	I1017 19:32:59.026531  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:32:59.057027  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:59.057341  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 19:32:59.057359  324968 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035-m03 && echo "ha-254035-m03" | sudo tee /etc/hostname
	I1017 19:32:59.282090  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m03
	
	I1017 19:32:59.282168  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:32:59.325073  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:59.325398  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 19:32:59.325420  324968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:32:59.509111  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
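The hostname step above is driven by a small shell snippet piped over SSH. The following is a minimal stand-alone Go sketch (renderHostsFixup is an invented helper, not minikube's own code) of how that idempotent /etc/hosts edit can be rendered for an arbitrary node name:

package main

import "fmt"

// renderHostsFixup builds the same idempotent /etc/hosts edit the log shows:
// add or rewrite the 127.0.1.1 entry so it points at the node's hostname.
func renderHostsFixup(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(renderHostsFixup("ha-254035-m03"))
}
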
	I1017 19:32:59.509181  324968 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:32:59.509265  324968 ubuntu.go:190] setting up certificates
	I1017 19:32:59.509297  324968 provision.go:84] configureAuth start
	I1017 19:32:59.509400  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m03
	I1017 19:32:59.548783  324968 provision.go:143] copyHostCerts
	I1017 19:32:59.548834  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:59.548871  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:32:59.548878  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:59.548957  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:32:59.549040  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:59.549072  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:32:59.549078  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:59.549106  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:32:59.549151  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:59.549168  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:32:59.549172  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:59.549195  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:32:59.549242  324968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035-m03 san=[127.0.0.1 192.168.49.4 ha-254035-m03 localhost minikube]
	I1017 19:33:00.043691  324968 provision.go:177] copyRemoteCerts
	I1017 19:33:00.043871  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:33:00.043944  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:00.064471  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:33:00.223369  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:33:00.223446  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:33:00.260611  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:33:00.260683  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:33:00.317143  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:33:00.317306  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:33:00.385743  324968 provision.go:87] duration metric: took 876.417393ms to configureAuth
	I1017 19:33:00.385819  324968 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:33:00.386115  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:00.386276  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:00.432179  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:00.432495  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 19:33:00.432512  324968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:33:00.901503  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:33:00.901591  324968 machine.go:96] duration metric: took 5.101084009s to provisionDockerMachine
	I1017 19:33:00.901618  324968 start.go:293] postStartSetup for "ha-254035-m03" (driver="docker")
	I1017 19:33:00.901662  324968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:33:00.901753  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:33:00.901835  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:00.927269  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:33:01.051646  324968 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:33:01.055666  324968 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:33:01.055692  324968 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:33:01.055704  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:33:01.055763  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:33:01.055854  324968 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:33:01.055866  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:33:01.055965  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:33:01.066853  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:33:01.101261  324968 start.go:296] duration metric: took 199.597831ms for postStartSetup
	I1017 19:33:01.101355  324968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:33:01.101408  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:01.130630  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:33:01.323449  324968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
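The two df probes above feed the free-space check for /var. A minimal sketch in Go (assuming a Linux df in PATH; varUsagePercent is an invented helper) of extracting the same Use% figure without awk:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// varUsagePercent runs "df -h /var" and returns the Use% column of the
// second output line, mirroring the awk 'NR==2{print $5}' pipeline in the log.
func varUsagePercent() (string, error) {
	out, err := exec.Command("df", "-h", "/var").Output()
	if err != nil {
		return "", err
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	if len(lines) < 2 {
		return "", fmt.Errorf("unexpected df output: %q", out)
	}
	fields := strings.Fields(lines[1])
	if len(fields) < 5 {
		return "", fmt.Errorf("unexpected df line: %q", lines[1])
	}
	return fields[4], nil
}

func main() {
	if pct, err := varUsagePercent(); err == nil {
		fmt.Println("/var usage:", pct)
	}
}
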
	I1017 19:33:01.379781  324968 fix.go:56] duration metric: took 5.969318931s for fixHost
	I1017 19:33:01.379809  324968 start.go:83] releasing machines lock for "ha-254035-m03", held for 5.969375603s
	I1017 19:33:01.379881  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m03
	I1017 19:33:01.416934  324968 out.go:179] * Found network options:
	I1017 19:33:01.419424  324968 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1017 19:33:01.422873  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:01.422914  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:01.422951  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:01.422967  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 19:33:01.423035  324968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:33:01.423092  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:01.423496  324968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:33:01.423560  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:01.460787  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:33:01.468755  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
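The "curl -sS -m 2 https://registry.k8s.io/" run above is a quick reachability probe with a two-second budget. A rough Go equivalent (canReachRegistry is an invented name; this is not the code path the log comes from):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// canReachRegistry reports whether https://registry.k8s.io/ answers within 2s,
// the same budget as the curl probe in the log.
func canReachRegistry() error {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("https://registry.k8s.io/")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("registry.k8s.io status:", resp.Status)
	return nil
}

func main() {
	if err := canReachRegistry(); err != nil {
		fmt.Println("registry unreachable:", err)
	}
}
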
	I1017 19:33:01.901807  324968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:33:02.054376  324968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:33:02.054456  324968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:33:02.063698  324968 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:33:02.063723  324968 start.go:495] detecting cgroup driver to use...
	I1017 19:33:02.063757  324968 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:33:02.063816  324968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:33:02.083121  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:33:02.099886  324968 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:33:02.099962  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:33:02.129631  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:33:02.146247  324968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:33:02.487383  324968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:33:02.778663  324968 docker.go:234] disabling docker service ...
	I1017 19:33:02.778765  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:33:02.797150  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:33:02.816103  324968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:33:03.072265  324968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:33:03.311051  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:33:03.337034  324968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:33:03.367080  324968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:33:03.367228  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.379211  324968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:33:03.379292  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.403390  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.417512  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.434353  324968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:33:03.450504  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.465403  324968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.497155  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.516048  324968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:33:03.527113  324968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
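The sed calls above edit /etc/crio/crio.conf.d/02-crio.conf in place to set the pause image and cgroup manager. A rough stand-alone Go sketch of the same key rewrite (setCrioKey is an invented helper; the path and values are taken from the log):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioKey rewrites a `key = value` line in a CRI-O drop-in, as the sed
// commands in the log do for pause_image and cgroup_manager.
func setCrioKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	_ = setCrioKey("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10.1")
	_ = setCrioKey("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs")
}
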
	I1017 19:33:03.546234  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:03.821017  324968 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:33:05.091469  324968 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.270414549s)
	I1017 19:33:05.091496  324968 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:33:05.091552  324968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:33:05.096822  324968 start.go:563] Will wait 60s for crictl version
	I1017 19:33:05.096899  324968 ssh_runner.go:195] Run: which crictl
	I1017 19:33:05.102601  324968 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:33:05.133868  324968 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:33:05.133956  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:33:05.169578  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:33:05.203999  324968 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:33:05.206796  324968 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 19:33:05.209777  324968 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1017 19:33:05.212751  324968 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:33:05.237841  324968 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:33:05.242830  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:33:05.255230  324968 mustload.go:65] Loading cluster: ha-254035
	I1017 19:33:05.255472  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:05.255718  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:33:05.273658  324968 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:33:05.273934  324968 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.4
	I1017 19:33:05.273942  324968 certs.go:195] generating shared ca certs ...
	I1017 19:33:05.273956  324968 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:33:05.274063  324968 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:33:05.274105  324968 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:33:05.274111  324968 certs.go:257] generating profile certs ...
	I1017 19:33:05.274183  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:33:05.274262  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.db0a5916
	I1017 19:33:05.274301  324968 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:33:05.274310  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:33:05.274333  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:33:05.274345  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:33:05.274357  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:33:05.274367  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:33:05.274379  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:33:05.274397  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:33:05.274409  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:33:05.274457  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:33:05.274485  324968 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:33:05.274493  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:33:05.274518  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:33:05.274539  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:33:05.274559  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:33:05.274597  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:33:05.274622  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:33:05.274637  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:33:05.274648  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:05.274703  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:33:05.302509  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:33:05.404899  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 19:33:05.408751  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 19:33:05.417079  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 19:33:05.420443  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 19:33:05.429786  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 19:33:05.433515  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 19:33:05.442432  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 19:33:05.446029  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1017 19:33:05.456258  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 19:33:05.460045  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 19:33:05.468819  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 19:33:05.473279  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 19:33:05.482460  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:33:05.502746  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:33:05.521060  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:33:05.540206  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:33:05.559261  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:33:05.579914  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:33:05.607376  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:33:05.624208  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:33:05.643462  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:33:05.663238  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:33:05.685107  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:33:05.703927  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 19:33:05.716945  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 19:33:05.730309  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 19:33:05.744332  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1017 19:33:05.760823  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 19:33:05.781849  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 19:33:05.797383  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1017 19:33:05.815449  324968 ssh_runner.go:195] Run: openssl version
	I1017 19:33:05.822374  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:33:05.830919  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:05.835675  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:05.835801  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:05.879325  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:33:05.888083  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:33:05.896261  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:33:05.900178  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:33:05.900239  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:33:05.943707  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:33:05.952618  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:33:05.961373  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:33:05.964981  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:33:05.965094  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:33:06.008396  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:33:06.017978  324968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:33:06.022220  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:33:06.064442  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:33:06.106411  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:33:06.147611  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:33:06.191689  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:33:06.235810  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
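Each openssl x509 -checkend 86400 run above confirms the certificate stays valid for at least another 24 hours. An equivalent check with Go's standard library (expiresWithin is an invented helper; the path is one of the files checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// inside the given window, mirroring `openssl x509 -checkend 86400`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, err)
}
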
	I1017 19:33:06.278610  324968 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1017 19:33:06.278711  324968 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
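The kubelet unit shown above is regenerated per node, with only the hostname-override and node-ip flags changing. A hypothetical text/template sketch of rendering such a drop-in (the struct and field names are invented; the flag set is copied from the log):

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn mirrors the ExecStart override printed in the log above.
const kubeletDropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

type node struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	// Values for this node are taken from the log lines above.
	_ = tmpl.Execute(os.Stdout, node{
		KubernetesVersion: "v1.34.1",
		NodeName:          "ha-254035-m03",
		NodeIP:            "192.168.49.4",
	})
}
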
	I1017 19:33:06.278740  324968 kube-vip.go:115] generating kube-vip config ...
	I1017 19:33:06.278801  324968 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:33:06.292033  324968 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:33:06.292094  324968 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
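The manifest above was generated without IPVS-based control-plane load-balancing because the lsmod | grep ip_vs probe a few lines earlier found no module. A small Go sketch of that detection (hasIPVS is an invented helper; it only inspects lsmod output):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasIPVS reports whether the ip_vs kernel module shows up in lsmod output,
// the same signal the log uses to decide on control-plane load-balancing.
func hasIPVS() bool {
	out, err := exec.Command("lsmod").Output()
	if err != nil {
		return false
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.HasPrefix(line, "ip_vs") {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println("ip_vs available:", hasIPVS())
}
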
	I1017 19:33:06.292151  324968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:33:06.300562  324968 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:33:06.300652  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 19:33:06.314364  324968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:33:06.329602  324968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:33:06.360017  324968 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 19:33:06.379948  324968 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:33:06.383943  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:33:06.395455  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:06.558780  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:33:06.573849  324968 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:33:06.574138  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:06.579819  324968 out.go:179] * Verifying Kubernetes components...
	I1017 19:33:06.582763  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:06.726699  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:33:06.743509  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 19:33:06.743622  324968 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 19:33:06.743944  324968 node_ready.go:35] waiting up to 6m0s for node "ha-254035-m03" to be "Ready" ...
	W1017 19:33:08.748353  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:11.248113  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:13.747938  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:16.248008  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:18.248671  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:20.249311  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:22.747279  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:24.747653  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:26.749385  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	I1017 19:33:27.747523  324968 node_ready.go:49] node "ha-254035-m03" is "Ready"
	I1017 19:33:27.747558  324968 node_ready.go:38] duration metric: took 21.003579566s for node "ha-254035-m03" to be "Ready" ...
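The Ready wait above amounts to roughly 21 seconds of periodic re-checks against the API server. A rough equivalent that shells out to kubectl and polls the node's Ready condition (kubectl on PATH, the kubeconfig, and the node name are assumptions; this is not the retry loop minikube runs):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitNodeReady polls `kubectl get node` until the Ready condition is True
// or the timeout elapses.
func waitNodeReady(node string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	jsonpath := `-o=jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "node", node, jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %s not Ready within %s", node, timeout)
}

func main() {
	fmt.Println(waitNodeReady("ha-254035-m03", 6*time.Minute))
}
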
	I1017 19:33:27.747571  324968 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:33:27.747631  324968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:33:27.766700  324968 api_server.go:72] duration metric: took 21.192473888s to wait for apiserver process to appear ...
	I1017 19:33:27.766729  324968 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:33:27.766753  324968 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 19:33:27.775571  324968 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 19:33:27.776498  324968 api_server.go:141] control plane version: v1.34.1
	I1017 19:33:27.776585  324968 api_server.go:131] duration metric: took 9.846294ms to wait for apiserver health ...
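The healthz probe above goes straight to the first control-plane endpoint rather than the VIP. A minimal sketch of the same check in Go, trusting the cluster CA file that appears earlier in the log (checkHealthz is an invented helper; anonymous access to /healthz is assumed to be allowed, as the 200 in the log suggests):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// checkHealthz GETs the apiserver /healthz endpoint and prints the status and
// body, trusting the cluster CA the log points at.
func checkHealthz(url, caPath string) error {
	caPEM, err := os.ReadFile(caPath)
	if err != nil {
		return err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
	return nil
}

func main() {
	_ = checkHealthz("https://192.168.49.2:8443/healthz",
		"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt")
}
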
	I1017 19:33:27.776595  324968 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:33:27.783374  324968 system_pods.go:59] 26 kube-system pods found
	I1017 19:33:27.783414  324968 system_pods.go:61] "coredns-66bc5c9577-gfklr" [8bf2b43b-91c9-4531-a571-36060412860e] Running
	I1017 19:33:27.783426  324968 system_pods.go:61] "coredns-66bc5c9577-wbgc8" [8e82e918-326c-4295-82ea-e35a31f64287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:33:27.783431  324968 system_pods.go:61] "etcd-ha-254035" [b4680f45-2e5c-49cd-8f12-76cd58e8a039] Running
	I1017 19:33:27.783438  324968 system_pods.go:61] "etcd-ha-254035-m02" [fd83b82f-417f-4a8d-b6f2-82d1a3ea4233] Running
	I1017 19:33:27.783442  324968 system_pods.go:61] "etcd-ha-254035-m03" [98b26c2c-cb88-4ade-80f5-45b9d2b82e8f] Running
	I1017 19:33:27.783446  324968 system_pods.go:61] "kindnet-2k9kj" [79d0c5f8-da5a-4d9e-b627-6746685bb4ec] Running
	I1017 19:33:27.783450  324968 system_pods.go:61] "kindnet-gzzsg" [9d09bb8e-ddb5-4533-9215-83fefb05a7eb] Running
	I1017 19:33:27.783455  324968 system_pods.go:61] "kindnet-pwhwv" [45fe6d6c-f02a-45fd-807f-68edc98a1964] Running
	I1017 19:33:27.783464  324968 system_pods.go:61] "kindnet-vss98" [a6f8b1bf-7a57-4b08-ba72-5c79fe8d1cbe] Running
	I1017 19:33:27.783469  324968 system_pods.go:61] "kube-apiserver-ha-254035" [d7b4adda-06ab-4426-9829-87c607195341] Running
	I1017 19:33:27.783480  324968 system_pods.go:61] "kube-apiserver-ha-254035-m02" [9099db15-8600-470e-94c3-ca2a5eeea1ff] Running
	I1017 19:33:27.783484  324968 system_pods.go:61] "kube-apiserver-ha-254035-m03" [eb9a2a88-a691-4422-bb82-e0c198d601eb] Running
	I1017 19:33:27.783489  324968 system_pods.go:61] "kube-controller-manager-ha-254035" [9c5287e1-d9d8-4020-b6ec-b1059fff6764] Running
	I1017 19:33:27.783500  324968 system_pods.go:61] "kube-controller-manager-ha-254035-m02" [54702c01-b38e-4b5e-b7ea-e5af903630c0] Running
	I1017 19:33:27.783505  324968 system_pods.go:61] "kube-controller-manager-ha-254035-m03" [2bfb9df5-b257-45ec-be05-e930f56e3c7c] Running
	I1017 19:33:27.783509  324968 system_pods.go:61] "kube-proxy-548b2" [4b772887-90df-4871-9343-69349bdda859] Running
	I1017 19:33:27.783519  324968 system_pods.go:61] "kube-proxy-b4fr6" [a7ace6b8-0068-4c44-b8d9-8d66b10fa286] Running
	I1017 19:33:27.783524  324968 system_pods.go:61] "kube-proxy-fr5ts" [5c43f8a5-c3e0-4893-9ab0-c99f69a43434] Running
	I1017 19:33:27.783528  324968 system_pods.go:61] "kube-proxy-k56cv" [32bc352e-19aa-4bcf-8c5f-bb6ffa1b2f4d] Running
	I1017 19:33:27.783532  324968 system_pods.go:61] "kube-scheduler-ha-254035" [2f888dff-efbc-410b-9e14-93754573f2f6] Running
	I1017 19:33:27.783536  324968 system_pods.go:61] "kube-scheduler-ha-254035-m02" [dcaa8956-7720-467c-86d5-c0296adc07dc] Running
	I1017 19:33:27.783541  324968 system_pods.go:61] "kube-scheduler-ha-254035-m03" [00e19215-9094-448d-b734-227230b1c474] Running
	I1017 19:33:27.783545  324968 system_pods.go:61] "kube-vip-ha-254035" [777cc428-db79-4dee-abea-a428f4fabb67] Running
	I1017 19:33:27.783552  324968 system_pods.go:61] "kube-vip-ha-254035-m02" [3a49ae9c-fc6c-4ed7-9162-7ebc56124917] Running
	I1017 19:33:27.783556  324968 system_pods.go:61] "kube-vip-ha-254035-m03" [fa0f29b9-585d-4e28-9e32-7d493f0010dd] Running
	I1017 19:33:27.783564  324968 system_pods.go:61] "storage-provisioner" [4784cc20-6df7-4e32-bbfa-e0b3be4a1e83] Running
	I1017 19:33:27.783569  324968 system_pods.go:74] duration metric: took 6.965509ms to wait for pod list to return data ...
	I1017 19:33:27.783582  324968 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:33:27.788939  324968 default_sa.go:45] found service account: "default"
	I1017 19:33:27.788978  324968 default_sa.go:55] duration metric: took 5.380156ms for default service account to be created ...
	I1017 19:33:27.788989  324968 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:33:27.884397  324968 system_pods.go:86] 26 kube-system pods found
	I1017 19:33:27.884440  324968 system_pods.go:89] "coredns-66bc5c9577-gfklr" [8bf2b43b-91c9-4531-a571-36060412860e] Running
	I1017 19:33:27.884450  324968 system_pods.go:89] "coredns-66bc5c9577-wbgc8" [8e82e918-326c-4295-82ea-e35a31f64287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:33:27.884456  324968 system_pods.go:89] "etcd-ha-254035" [b4680f45-2e5c-49cd-8f12-76cd58e8a039] Running
	I1017 19:33:27.884462  324968 system_pods.go:89] "etcd-ha-254035-m02" [fd83b82f-417f-4a8d-b6f2-82d1a3ea4233] Running
	I1017 19:33:27.884466  324968 system_pods.go:89] "etcd-ha-254035-m03" [98b26c2c-cb88-4ade-80f5-45b9d2b82e8f] Running
	I1017 19:33:27.884475  324968 system_pods.go:89] "kindnet-2k9kj" [79d0c5f8-da5a-4d9e-b627-6746685bb4ec] Running
	I1017 19:33:27.884478  324968 system_pods.go:89] "kindnet-gzzsg" [9d09bb8e-ddb5-4533-9215-83fefb05a7eb] Running
	I1017 19:33:27.884482  324968 system_pods.go:89] "kindnet-pwhwv" [45fe6d6c-f02a-45fd-807f-68edc98a1964] Running
	I1017 19:33:27.884494  324968 system_pods.go:89] "kindnet-vss98" [a6f8b1bf-7a57-4b08-ba72-5c79fe8d1cbe] Running
	I1017 19:33:27.884505  324968 system_pods.go:89] "kube-apiserver-ha-254035" [d7b4adda-06ab-4426-9829-87c607195341] Running
	I1017 19:33:27.884525  324968 system_pods.go:89] "kube-apiserver-ha-254035-m02" [9099db15-8600-470e-94c3-ca2a5eeea1ff] Running
	I1017 19:33:27.884531  324968 system_pods.go:89] "kube-apiserver-ha-254035-m03" [eb9a2a88-a691-4422-bb82-e0c198d601eb] Running
	I1017 19:33:27.884535  324968 system_pods.go:89] "kube-controller-manager-ha-254035" [9c5287e1-d9d8-4020-b6ec-b1059fff6764] Running
	I1017 19:33:27.884540  324968 system_pods.go:89] "kube-controller-manager-ha-254035-m02" [54702c01-b38e-4b5e-b7ea-e5af903630c0] Running
	I1017 19:33:27.884545  324968 system_pods.go:89] "kube-controller-manager-ha-254035-m03" [2bfb9df5-b257-45ec-be05-e930f56e3c7c] Running
	I1017 19:33:27.884559  324968 system_pods.go:89] "kube-proxy-548b2" [4b772887-90df-4871-9343-69349bdda859] Running
	I1017 19:33:27.884563  324968 system_pods.go:89] "kube-proxy-b4fr6" [a7ace6b8-0068-4c44-b8d9-8d66b10fa286] Running
	I1017 19:33:27.884567  324968 system_pods.go:89] "kube-proxy-fr5ts" [5c43f8a5-c3e0-4893-9ab0-c99f69a43434] Running
	I1017 19:33:27.884571  324968 system_pods.go:89] "kube-proxy-k56cv" [32bc352e-19aa-4bcf-8c5f-bb6ffa1b2f4d] Running
	I1017 19:33:27.884602  324968 system_pods.go:89] "kube-scheduler-ha-254035" [2f888dff-efbc-410b-9e14-93754573f2f6] Running
	I1017 19:33:27.884606  324968 system_pods.go:89] "kube-scheduler-ha-254035-m02" [dcaa8956-7720-467c-86d5-c0296adc07dc] Running
	I1017 19:33:27.884610  324968 system_pods.go:89] "kube-scheduler-ha-254035-m03" [00e19215-9094-448d-b734-227230b1c474] Running
	I1017 19:33:27.884614  324968 system_pods.go:89] "kube-vip-ha-254035" [777cc428-db79-4dee-abea-a428f4fabb67] Running
	I1017 19:33:27.884618  324968 system_pods.go:89] "kube-vip-ha-254035-m02" [3a49ae9c-fc6c-4ed7-9162-7ebc56124917] Running
	I1017 19:33:27.884622  324968 system_pods.go:89] "kube-vip-ha-254035-m03" [fa0f29b9-585d-4e28-9e32-7d493f0010dd] Running
	I1017 19:33:27.884630  324968 system_pods.go:89] "storage-provisioner" [4784cc20-6df7-4e32-bbfa-e0b3be4a1e83] Running
	I1017 19:33:27.884636  324968 system_pods.go:126] duration metric: took 95.641254ms to wait for k8s-apps to be running ...
	I1017 19:33:27.884659  324968 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:33:27.884730  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:33:27.903571  324968 system_svc.go:56] duration metric: took 18.903653ms WaitForService to wait for kubelet
	I1017 19:33:27.903609  324968 kubeadm.go:586] duration metric: took 21.32938831s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:33:27.903634  324968 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:33:27.907627  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:27.907667  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:27.907680  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:27.907685  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:27.907689  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:27.907694  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:27.907697  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:27.907701  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:27.907706  324968 node_conditions.go:105] duration metric: took 4.066189ms to run NodePressure ...
	I1017 19:33:27.907719  324968 start.go:241] waiting for startup goroutines ...
	I1017 19:33:27.907751  324968 start.go:255] writing updated cluster config ...
	I1017 19:33:27.911402  324968 out.go:203] 
	I1017 19:33:27.915521  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:27.915649  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:33:27.918913  324968 out.go:179] * Starting "ha-254035-m04" worker node in "ha-254035" cluster
	I1017 19:33:27.921713  324968 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:33:27.924620  324968 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:33:27.927532  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:33:27.927564  324968 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:33:27.927567  324968 cache.go:58] Caching tarball of preloaded images
	I1017 19:33:27.927721  324968 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:33:27.927731  324968 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:33:27.927887  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:33:27.960833  324968 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:33:27.960852  324968 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:33:27.960865  324968 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:33:27.960889  324968 start.go:360] acquireMachinesLock for ha-254035-m04: {Name:mk584e2cd96462cdaa6d1f2088a137ff40c48733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:33:27.960940  324968 start.go:364] duration metric: took 36.438µs to acquireMachinesLock for "ha-254035-m04"
	I1017 19:33:27.960959  324968 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:33:27.960964  324968 fix.go:54] fixHost starting: m04
	I1017 19:33:27.961255  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m04 --format={{.State.Status}}
	I1017 19:33:27.995390  324968 fix.go:112] recreateIfNeeded on ha-254035-m04: state=Stopped err=<nil>
	W1017 19:33:27.995487  324968 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:33:27.999207  324968 out.go:252] * Restarting existing docker container for "ha-254035-m04" ...
	I1017 19:33:27.999295  324968 cli_runner.go:164] Run: docker start ha-254035-m04
	I1017 19:33:28.394503  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m04 --format={{.State.Status}}
	I1017 19:33:28.421995  324968 kic.go:430] container "ha-254035-m04" state is running.
	I1017 19:33:28.422449  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m04
	I1017 19:33:28.441865  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:33:28.442116  324968 machine.go:93] provisionDockerMachine start ...
	I1017 19:33:28.442199  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:28.474872  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:28.475264  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 19:33:28.475277  324968 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:33:28.476011  324968 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:33:31.633234  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m04
	
	I1017 19:33:31.633323  324968 ubuntu.go:182] provisioning hostname "ha-254035-m04"
	I1017 19:33:31.633415  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:31.653177  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:31.653483  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 19:33:31.653500  324968 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035-m04 && echo "ha-254035-m04" | sudo tee /etc/hostname
	I1017 19:33:31.837574  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m04
	
	I1017 19:33:31.837648  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:31.855639  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:31.855942  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 19:33:31.855960  324968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:33:32.021671  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:33:32.021700  324968 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:33:32.021717  324968 ubuntu.go:190] setting up certificates
	I1017 19:33:32.021728  324968 provision.go:84] configureAuth start
	I1017 19:33:32.021791  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m04
	I1017 19:33:32.058708  324968 provision.go:143] copyHostCerts
	I1017 19:33:32.058751  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:33:32.058799  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:33:32.058807  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:33:32.058887  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:33:32.058963  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:33:32.058981  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:33:32.058986  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:33:32.059011  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:33:32.059054  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:33:32.059070  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:33:32.059074  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:33:32.059096  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:33:32.059142  324968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035-m04 san=[127.0.0.1 192.168.49.5 ha-254035-m04 localhost minikube]
	I1017 19:33:32.315144  324968 provision.go:177] copyRemoteCerts
	I1017 19:33:32.315269  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:33:32.315346  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:32.336727  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:32.451884  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:33:32.451953  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:33:32.477259  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:33:32.477335  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:33:32.496861  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:33:32.496932  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:33:32.517190  324968 provision.go:87] duration metric: took 495.446144ms to configureAuth
	I1017 19:33:32.517214  324968 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:33:32.517497  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:32.517606  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:32.538066  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:32.538377  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 19:33:32.538397  324968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:33:32.868308  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:33:32.868331  324968 machine.go:96] duration metric: took 4.426196148s to provisionDockerMachine
	I1017 19:33:32.868343  324968 start.go:293] postStartSetup for "ha-254035-m04" (driver="docker")
	I1017 19:33:32.868353  324968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:33:32.868430  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:33:32.868488  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:32.888400  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:33.003003  324968 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:33:33.008119  324968 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:33:33.008155  324968 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:33:33.008169  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:33:33.008242  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:33:33.008327  324968 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:33:33.008339  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:33:33.008446  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:33:33.018512  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:33:33.048826  324968 start.go:296] duration metric: took 180.468283ms for postStartSetup
	I1017 19:33:33.048927  324968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:33:33.048979  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:33.068864  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:33.183386  324968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:33:33.188620  324968 fix.go:56] duration metric: took 5.227645919s for fixHost
	I1017 19:33:33.188649  324968 start.go:83] releasing machines lock for "ha-254035-m04", held for 5.227700884s
	I1017 19:33:33.188718  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m04
	I1017 19:33:33.212152  324968 out.go:179] * Found network options:
	I1017 19:33:33.215093  324968 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1017 19:33:33.217835  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217871  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217882  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217906  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217916  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217926  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 19:33:33.217995  324968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:33:33.218040  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:33.218316  324968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:33:33.218377  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:33.247548  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:33.256825  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:33.415645  324968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:33:33.492514  324968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:33:33.492637  324968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:33:33.500683  324968 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:33:33.500716  324968 start.go:495] detecting cgroup driver to use...
	I1017 19:33:33.500752  324968 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:33:33.500801  324968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:33:33.517445  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:33:33.537937  324968 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:33:33.538053  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:33:33.556447  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:33:33.576435  324968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:33:33.721164  324968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:33:33.856018  324968 docker.go:234] disabling docker service ...
	I1017 19:33:33.856163  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:33:33.874251  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:33:33.889153  324968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:33:34.059244  324968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:33:34.205588  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:33:34.223596  324968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:33:34.248335  324968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:33:34.248449  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.259664  324968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:33:34.259750  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.274225  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.284260  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.293374  324968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:33:34.301939  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.313190  324968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.322270  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.335994  324968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:33:34.345500  324968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:33:34.355597  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:34.485902  324968 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:33:34.658593  324968 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:33:34.658711  324968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:33:34.663315  324968 start.go:563] Will wait 60s for crictl version
	I1017 19:33:34.663396  324968 ssh_runner.go:195] Run: which crictl
	I1017 19:33:34.667245  324968 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:33:34.704265  324968 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:33:34.704411  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:33:34.738612  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:33:34.775046  324968 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:33:34.777914  324968 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 19:33:34.780845  324968 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1017 19:33:34.783723  324968 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1017 19:33:34.786627  324968 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:33:34.808635  324968 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:33:34.815185  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:33:34.827225  324968 mustload.go:65] Loading cluster: ha-254035
	I1017 19:33:34.827480  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:34.827743  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:33:34.847031  324968 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:33:34.847380  324968 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.5
	I1017 19:33:34.847390  324968 certs.go:195] generating shared ca certs ...
	I1017 19:33:34.847415  324968 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:33:34.847641  324968 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:33:34.847708  324968 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:33:34.847720  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:33:34.847749  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:33:34.847765  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:33:34.847775  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:33:34.847869  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:33:34.847922  324968 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:33:34.847932  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:33:34.847959  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:33:34.847999  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:33:34.848045  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:33:34.848123  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:33:34.848155  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:33:34.848175  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:33:34.848187  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:34.848206  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:33:34.868384  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:33:34.889303  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:33:34.915103  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:33:34.947695  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:33:34.970689  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:33:34.991429  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:33:35.015821  324968 ssh_runner.go:195] Run: openssl version
	I1017 19:33:35.023417  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:33:35.033117  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:33:35.038047  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:33:35.038163  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:33:35.080117  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:33:35.088886  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:33:35.098283  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:35.103083  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:35.103169  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:35.146427  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:33:35.160483  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:33:35.172663  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:33:35.177994  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:33:35.178116  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:33:35.221220  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:33:35.236438  324968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:33:35.243682  324968 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 19:33:35.243736  324968 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.34.1 crio false true} ...
	I1017 19:33:35.243840  324968 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:33:35.243919  324968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:33:35.253526  324968 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:33:35.253625  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1017 19:33:35.262623  324968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:33:35.276015  324968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:33:35.290622  324968 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:33:35.294428  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:33:35.304725  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:35.455305  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:33:35.471222  324968 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1017 19:33:35.471611  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:35.476720  324968 out.go:179] * Verifying Kubernetes components...
	I1017 19:33:35.479857  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:35.599550  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:33:35.615050  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 19:33:35.615120  324968 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 19:33:35.615344  324968 node_ready.go:35] waiting up to 6m0s for node "ha-254035-m04" to be "Ready" ...
	W1017 19:33:37.619036  324968 node_ready.go:57] node "ha-254035-m04" has "Ready":"Unknown" status (will retry)
	W1017 19:33:39.619924  324968 node_ready.go:57] node "ha-254035-m04" has "Ready":"Unknown" status (will retry)
	W1017 19:33:42.120954  324968 node_ready.go:57] node "ha-254035-m04" has "Ready":"Unknown" status (will retry)
	I1017 19:33:42.619614  324968 node_ready.go:49] node "ha-254035-m04" is "Ready"
	I1017 19:33:42.619639  324968 node_ready.go:38] duration metric: took 7.004273155s for node "ha-254035-m04" to be "Ready" ...
	I1017 19:33:42.619652  324968 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:33:42.619704  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:33:42.643671  324968 system_svc.go:56] duration metric: took 24.010635ms WaitForService to wait for kubelet
	I1017 19:33:42.643702  324968 kubeadm.go:586] duration metric: took 7.172435361s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:33:42.643720  324968 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:33:42.658471  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:42.658503  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:42.658515  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:42.658520  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:42.658524  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:42.658528  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:42.658532  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:42.658536  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:42.658541  324968 node_conditions.go:105] duration metric: took 14.815335ms to run NodePressure ...
	I1017 19:33:42.658553  324968 start.go:241] waiting for startup goroutines ...
	I1017 19:33:42.658578  324968 start.go:255] writing updated cluster config ...
	I1017 19:33:42.658896  324968 ssh_runner.go:195] Run: rm -f paused
	I1017 19:33:42.666036  324968 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:33:42.666578  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 19:33:42.748115  324968 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gfklr" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.799614  324968 pod_ready.go:94] pod "coredns-66bc5c9577-gfklr" is "Ready"
	I1017 19:33:42.799652  324968 pod_ready.go:86] duration metric: took 51.505206ms for pod "coredns-66bc5c9577-gfklr" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.799662  324968 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wbgc8" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.845846  324968 pod_ready.go:94] pod "coredns-66bc5c9577-wbgc8" is "Ready"
	I1017 19:33:42.845885  324968 pod_ready.go:86] duration metric: took 46.206115ms for pod "coredns-66bc5c9577-wbgc8" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.863051  324968 pod_ready.go:83] waiting for pod "etcd-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.871909  324968 pod_ready.go:94] pod "etcd-ha-254035" is "Ready"
	I1017 19:33:42.871935  324968 pod_ready.go:86] duration metric: took 8.855813ms for pod "etcd-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.871945  324968 pod_ready.go:83] waiting for pod "etcd-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.880198  324968 pod_ready.go:94] pod "etcd-ha-254035-m02" is "Ready"
	I1017 19:33:42.880226  324968 pod_ready.go:86] duration metric: took 8.274439ms for pod "etcd-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.880236  324968 pod_ready.go:83] waiting for pod "etcd-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.067322  324968 request.go:683] "Waited before sending request" delay="183.325668ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m03"
	I1017 19:33:43.071041  324968 pod_ready.go:94] pod "etcd-ha-254035-m03" is "Ready"
	I1017 19:33:43.071067  324968 pod_ready.go:86] duration metric: took 190.824595ms for pod "etcd-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.267504  324968 request.go:683] "Waited before sending request" delay="196.34087ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1017 19:33:43.271686  324968 pod_ready.go:83] waiting for pod "kube-apiserver-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.468020  324968 request.go:683] "Waited before sending request" delay="196.217403ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-254035"
	I1017 19:33:43.666979  324968 request.go:683] "Waited before sending request" delay="194.232504ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035"
	I1017 19:33:43.670115  324968 pod_ready.go:94] pod "kube-apiserver-ha-254035" is "Ready"
	I1017 19:33:43.670144  324968 pod_ready.go:86] duration metric: took 398.430494ms for pod "kube-apiserver-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.670153  324968 pod_ready.go:83] waiting for pod "kube-apiserver-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.867552  324968 request.go:683] "Waited before sending request" delay="197.322859ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-254035-m02"
	I1017 19:33:44.067901  324968 request.go:683] "Waited before sending request" delay="193.273769ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m02"
	I1017 19:33:44.071414  324968 pod_ready.go:94] pod "kube-apiserver-ha-254035-m02" is "Ready"
	I1017 19:33:44.071442  324968 pod_ready.go:86] duration metric: took 401.282299ms for pod "kube-apiserver-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:44.071453  324968 pod_ready.go:83] waiting for pod "kube-apiserver-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:44.267920  324968 request.go:683] "Waited before sending request" delay="196.393406ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-254035-m03"
	I1017 19:33:44.467967  324968 request.go:683] "Waited before sending request" delay="196.317182ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m03"
	I1017 19:33:44.472041  324968 pod_ready.go:94] pod "kube-apiserver-ha-254035-m03" is "Ready"
	I1017 19:33:44.472068  324968 pod_ready.go:86] duration metric: took 400.608635ms for pod "kube-apiserver-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:44.667472  324968 request.go:683] "Waited before sending request" delay="195.295893ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1017 19:33:44.671549  324968 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:44.868014  324968 request.go:683] "Waited before sending request" delay="196.366601ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-254035"
	I1017 19:33:45.067086  324968 request.go:683] "Waited before sending request" delay="193.311224ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035"
	I1017 19:33:45.072221  324968 pod_ready.go:94] pod "kube-controller-manager-ha-254035" is "Ready"
	I1017 19:33:45.072250  324968 pod_ready.go:86] duration metric: took 400.67411ms for pod "kube-controller-manager-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:45.072261  324968 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:45.267682  324968 request.go:683] "Waited before sending request" delay="195.335416ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-254035-m02"
	I1017 19:33:45.467614  324968 request.go:683] "Waited before sending request" delay="188.393045ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m02"
	I1017 19:33:45.470975  324968 pod_ready.go:94] pod "kube-controller-manager-ha-254035-m02" is "Ready"
	I1017 19:33:45.471007  324968 pod_ready.go:86] duration metric: took 398.736291ms for pod "kube-controller-manager-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:45.471017  324968 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:45.667358  324968 request.go:683] "Waited before sending request" delay="196.263104ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-254035-m03"
	I1017 19:33:45.867478  324968 request.go:683] "Waited before sending request" delay="196.63098ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m03"
	I1017 19:33:45.870372  324968 pod_ready.go:94] pod "kube-controller-manager-ha-254035-m03" is "Ready"
	I1017 19:33:45.870427  324968 pod_ready.go:86] duration metric: took 399.402071ms for pod "kube-controller-manager-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.067916  324968 request.go:683] "Waited before sending request" delay="197.353037ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1017 19:33:46.071965  324968 pod_ready.go:83] waiting for pod "kube-proxy-548b2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.267426  324968 request.go:683] "Waited before sending request" delay="195.355338ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-548b2"
	I1017 19:33:46.467392  324968 request.go:683] "Waited before sending request" delay="193.351461ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035"
	I1017 19:33:46.470716  324968 pod_ready.go:94] pod "kube-proxy-548b2" is "Ready"
	I1017 19:33:46.470745  324968 pod_ready.go:86] duration metric: took 398.750601ms for pod "kube-proxy-548b2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.470755  324968 pod_ready.go:83] waiting for pod "kube-proxy-b4fr6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.667046  324968 request.go:683] "Waited before sending request" delay="196.219848ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b4fr6"
	I1017 19:33:46.867280  324968 request.go:683] "Waited before sending request" delay="196.299896ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m02"
	I1017 19:33:46.870670  324968 pod_ready.go:94] pod "kube-proxy-b4fr6" is "Ready"
	I1017 19:33:46.870707  324968 pod_ready.go:86] duration metric: took 399.946057ms for pod "kube-proxy-b4fr6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.870717  324968 pod_ready.go:83] waiting for pod "kube-proxy-fr5ts" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:47.067054  324968 request.go:683] "Waited before sending request" delay="196.240361ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fr5ts"
	I1017 19:33:47.267565  324968 request.go:683] "Waited before sending request" delay="196.190762ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m04"
	I1017 19:33:47.467316  324968 request.go:683] "Waited before sending request" delay="96.206992ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fr5ts"
	I1017 19:33:47.667564  324968 request.go:683] "Waited before sending request" delay="186.261475ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m04"
	I1017 19:33:48.067382  324968 request.go:683] "Waited before sending request" delay="186.267596ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m04"
	I1017 19:33:48.467049  324968 request.go:683] "Waited before sending request" delay="92.145258ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m04"
	W1017 19:33:48.877689  324968 pod_ready.go:104] pod "kube-proxy-fr5ts" is not "Ready", error: <nil>
	W1017 19:33:50.877808  324968 pod_ready.go:104] pod "kube-proxy-fr5ts" is not "Ready", error: <nil>
	I1017 19:33:52.377837  324968 pod_ready.go:94] pod "kube-proxy-fr5ts" is "Ready"
	I1017 19:33:52.377866  324968 pod_ready.go:86] duration metric: took 5.507143006s for pod "kube-proxy-fr5ts" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.377876  324968 pod_ready.go:83] waiting for pod "kube-proxy-k56cv" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.386625  324968 pod_ready.go:94] pod "kube-proxy-k56cv" is "Ready"
	I1017 19:33:52.386655  324968 pod_ready.go:86] duration metric: took 8.770737ms for pod "kube-proxy-k56cv" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.390245  324968 pod_ready.go:83] waiting for pod "kube-scheduler-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.467536  324968 request.go:683] "Waited before sending request" delay="77.200252ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-254035"
	I1017 19:33:52.667089  324968 request.go:683] "Waited before sending request" delay="193.299146ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035"
	I1017 19:33:52.670454  324968 pod_ready.go:94] pod "kube-scheduler-ha-254035" is "Ready"
	I1017 19:33:52.670484  324968 pod_ready.go:86] duration metric: took 280.216212ms for pod "kube-scheduler-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.670495  324968 pod_ready.go:83] waiting for pod "kube-scheduler-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.867921  324968 request.go:683] "Waited before sending request" delay="197.327438ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-254035-m02"
	I1017 19:33:53.067947  324968 request.go:683] "Waited before sending request" delay="195.176914ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m02"
	I1017 19:33:53.072896  324968 pod_ready.go:94] pod "kube-scheduler-ha-254035-m02" is "Ready"
	I1017 19:33:53.072972  324968 pod_ready.go:86] duration metric: took 402.46965ms for pod "kube-scheduler-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:53.072997  324968 pod_ready.go:83] waiting for pod "kube-scheduler-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:53.267273  324968 request.go:683] "Waited before sending request" delay="194.142538ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-254035-m03"
	I1017 19:33:53.467118  324968 request.go:683] "Waited before sending request" delay="196.200739ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m03"
	I1017 19:33:53.470125  324968 pod_ready.go:94] pod "kube-scheduler-ha-254035-m03" is "Ready"
	I1017 19:33:53.470152  324968 pod_ready.go:86] duration metric: took 397.132807ms for pod "kube-scheduler-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:53.470163  324968 pod_ready.go:40] duration metric: took 10.804092337s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:33:53.525625  324968 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 19:33:53.530847  324968 out.go:179] * Done! kubectl is now configured to use "ha-254035" cluster and "default" namespace by default
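
For reference, the node_ready.go / pod_ready.go waits logged above reduce to polling the API server for a Ready condition until a timeout expires. A minimal sketch with client-go follows; the helper name waitPodReady, the kubeconfig path, and the 2-second retry interval are illustrative assumptions, not minikube's actual code.

// readiness_sketch.go - illustrative only; not minikube source.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
		}
		time.Sleep(2 * time.Second) // retry interval; minikube's actual backoff may differ
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-fr5ts", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

The retries visible above for "kube-proxy-fr5ts" land roughly two seconds apart, which is consistent with this kind of fixed-interval polling loop.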
	
	
	==> CRI-O <==
	Oct 17 19:33:01 ha-254035 crio[667]: time="2025-10-17T19:33:01.657638061Z" level=info msg="Started container" PID=1327 containerID=e9ece41337b80cfabb4196dc2d55dc644a949f49cd22450cf623b7f5257d5d69 description=kube-system/kindnet-gzzsg/kindnet-cni id=1467213a-df01-47f7-91a8-c9ecfa2692be name=/runtime.v1.RuntimeService/StartContainer sandboxID=fe908ac1b77150ea99b48733349b105097380b5cd2e2f243156591744040d978
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.209485703Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.212893465Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.212927827Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.21295117Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.216661947Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.216697064Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.216721523Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.220161292Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.220191347Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.220215756Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.223221953Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.223254084Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:33:27 ha-254035 conmon[1135]: conmon 0cc2287088bc871e7f4d <ninfo>: container 1139 exited with status 1
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.068588792Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b7b509f3-b012-49ed-9e6d-e0ab750c4b6b name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.07344856Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=25fe3696-e90b-4a83-a3ad-33aa6af72f3d name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.077367011Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=28e7f811-dec4-4fcb-9722-3a341888b632 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.077693042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.096972398Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.097208428Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/17cd3234a8a982607354e16eb6b88983eecf7edea137eb96fbc8cd597e6577e2/merged/etc/passwd: no such file or directory"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.09724453Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/17cd3234a8a982607354e16eb6b88983eecf7edea137eb96fbc8cd597e6577e2/merged/etc/group: no such file or directory"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.108385903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.143116992Z" level=info msg="Created container f03a6dda4443a7ca4881c99c1a1b1d649515e8a1e7c9d51bf1fad01a41e7083e: kube-system/storage-provisioner/storage-provisioner" id=28e7f811-dec4-4fcb-9722-3a341888b632 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.144104625Z" level=info msg="Starting container: f03a6dda4443a7ca4881c99c1a1b1d649515e8a1e7c9d51bf1fad01a41e7083e" id=e482d8e9-fc6c-4e49-a1a6-8af83382da5d name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.153409034Z" level=info msg="Started container" PID=1450 containerID=f03a6dda4443a7ca4881c99c1a1b1d649515e8a1e7c9d51bf1fad01a41e7083e description=kube-system/storage-provisioner/storage-provisioner id=e482d8e9-fc6c-4e49-a1a6-8af83382da5d name=/runtime.v1.RuntimeService/StartContainer sandboxID=ebb6a1f53c4835f98f170cb0cc9a8c381e017f19896c6a29b18d262526414238
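
The "CNI monitoring event CREATE/WRITE/RENAME" entries above come from CRI-O watching /etc/cni/net.d and re-reading the kindnet conflist on each change. The sketch below shows the same idea with fsnotify; it is an assumption for illustration, not CRI-O's actual watcher implementation.

// cni_watch_sketch.go - illustrative; mirrors the "CNI monitoring event" lines above.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Watch the CNI config directory referenced in the CRI-O journal above.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// CREATE/WRITE/RENAME events correspond to the journal entries above;
			// a real runtime would re-parse the .conflist files at this point.
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}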
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	f03a6dda4443a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   27 seconds ago       Running             storage-provisioner       4                   ebb6a1f53c483       storage-provisioner                 kube-system
	e9ece41337b80       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   54 seconds ago       Running             kindnet-cni               2                   fe908ac1b7715       kindnet-gzzsg                       kube-system
	83532ba0435f2       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   55 seconds ago       Running             busybox                   2                   0240e4c18c32a       busybox-7b57f96db7-nc6x2            default
	db8d02bae2fa1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   56 seconds ago       Running             coredns                   2                   507d7b819debe       coredns-66bc5c9577-wbgc8            kube-system
	706bee2267664       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   56 seconds ago       Running             coredns                   2                   c6367bcfd35d4       coredns-66bc5c9577-gfklr            kube-system
	d51ad27d42179       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   58 seconds ago       Running             kube-proxy                2                   7bb73f9365e64       kube-proxy-548b2                    kube-system
	0cc2287088bc8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   58 seconds ago       Exited              storage-provisioner       3                   ebb6a1f53c483       storage-provisioner                 kube-system
	cd9dec0514b24       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Running             kube-controller-manager   7                   251b6be3c0c4f       kube-controller-manager-ha-254035   kube-system
	d713edbb381bb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   6                   251b6be3c0c4f       kube-controller-manager-ha-254035   kube-system
	fb534fcdb2d89       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Running             kube-apiserver            3                   0fd33e0b5d3e5       kube-apiserver-ha-254035            kube-system
	ab6180a80f68d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Running             etcd                      2                   bc1edea2f668b       etcd-ha-254035                      kube-system
	c4609fc3fd1c0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Running             kube-scheduler            2                   32d4263a101a2       kube-scheduler-ha-254035            kube-system
	0652fd27f5bff       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   About a minute ago   Running             kube-vip                  1                   31afc78057fe9       kube-vip-ha-254035                  kube-system
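
The container-status table above is a CRI query against CRI-O's socket (the same data `crictl ps -a` reports). A minimal sketch of that query using the CRI API follows; it is not the tooling that produced this report, and the truncation to 13-character IDs is only cosmetic.

// crictl_ps_sketch.go - lists containers over CRI-O's socket (sketch only).
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pb "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := pb.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(ctx, &pb.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// Id, image and state mirror the CONTAINER / IMAGE / STATE columns above.
		fmt.Printf("%.13s  %s  %s\n", c.Id, c.GetImage().GetImage(), c.State)
	}
}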
	
	
	==> coredns [706bee22676646b717cd807f92b3341bc3bee9a22195d1a96f63858b9fe3f381] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35042 - 59078 "HINFO IN 7580743585985535806.8578026735020374478. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014332173s
	
	
	==> coredns [db8d02bae2fa1a6f368ea962e35a1111cb4230bcadf4709cf7545ace2d4272d6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35443 - 54421 "HINFO IN 8550404136984308969.4709042246801981974. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015029672s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-254035
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_17_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:17:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:33:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:32:45 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:32:45 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:32:45 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:32:45 +0000   Fri, 17 Oct 2025 19:32:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-254035
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                eadb5c5f-dcbb-485c-aea7-3aa5b951fd9e
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-nc6x2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-66bc5c9577-gfklr             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 coredns-66bc5c9577-wbgc8             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 etcd-ha-254035                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-gzzsg                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-254035             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-254035    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-548b2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-254035             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-254035                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 58s                  kube-proxy       
	  Normal   Starting                 7m57s                kube-proxy       
	  Normal   Starting                 15m                  kube-proxy       
	  Normal   Starting                 15m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     15m                  kubelet          Node ha-254035 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m                  kubelet          Node ha-254035 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  15m                  kubelet          Node ha-254035 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 15m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           15m                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           15m                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   NodeReady                15m                  kubelet          Node ha-254035 status is now: NodeReady
	  Normal   RegisteredNode           13m                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           10m                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)    kubelet          Node ha-254035 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)    kubelet          Node ha-254035 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)    kubelet          Node ha-254035 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m25s                node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   NodeHasSufficientMemory  103s (x8 over 103s)  kubelet          Node ha-254035 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    103s (x8 over 103s)  kubelet          Node ha-254035 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     103s (x8 over 103s)  kubelet          Node ha-254035 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           65s                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           64s                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           28s                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	
	
	Name:               ha-254035-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_18_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:18:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:33:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:33:05 +0000   Fri, 17 Oct 2025 19:32:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:33:05 +0000   Fri, 17 Oct 2025 19:32:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:33:05 +0000   Fri, 17 Oct 2025 19:32:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:33:05 +0000   Fri, 17 Oct 2025 19:32:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-254035-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                6c5e97e0-fa27-407d-a976-b646e8a40ca5
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6xjlp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-254035-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-vss98                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-254035-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-254035-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-b4fr6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-254035-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-254035-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 15m                  kube-proxy       
	  Normal   Starting                 37s                  kube-proxy       
	  Normal   Starting                 10m                  kube-proxy       
	  Normal   RegisteredNode           15m                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           15m                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           13m                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Warning  CgroupV1                 11m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 11m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)    kubelet          Node ha-254035-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)    kubelet          Node ha-254035-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)    kubelet          Node ha-254035-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeNotReady             10m                  node-controller  Node ha-254035-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           10m                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           7m25s                node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   NodeNotReady             6m35s                node-controller  Node ha-254035-m02 status is now: NodeNotReady
	  Normal   Starting                 100s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 100s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  100s (x8 over 100s)  kubelet          Node ha-254035-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    100s (x8 over 100s)  kubelet          Node ha-254035-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     100s (x8 over 100s)  kubelet          Node ha-254035-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           65s                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           64s                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           28s                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	
	
	Name:               ha-254035-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_20_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:19:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:33:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:33:27 +0000   Fri, 17 Oct 2025 19:33:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:33:27 +0000   Fri, 17 Oct 2025 19:33:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:33:27 +0000   Fri, 17 Oct 2025 19:33:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:33:27 +0000   Fri, 17 Oct 2025 19:33:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-254035-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                2f343c58-0cc9-444a-bc88-7799c3ff52df
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-979zm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-254035-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-2k9kj                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-254035-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-254035-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-k56cv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-254035-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-254035-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 13s                kube-proxy       
	  Normal   Starting                 13m                kube-proxy       
	  Normal   RegisteredNode           13m                node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           7m25s              node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   NodeNotReady             6m35s              node-controller  Node ha-254035-m03 status is now: NodeNotReady
	  Normal   RegisteredNode           65s                node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           64s                node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node ha-254035-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node ha-254035-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node ha-254035-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           28s                node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	
	
	Name:               ha-254035-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_21_16_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:21:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:33:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:33:42 +0000   Fri, 17 Oct 2025 19:33:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:33:42 +0000   Fri, 17 Oct 2025 19:33:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:33:42 +0000   Fri, 17 Oct 2025 19:33:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:33:42 +0000   Fri, 17 Oct 2025 19:33:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-254035-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                12691412-a8b5-426e-846e-d6161e527ea6
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pwhwv       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-proxy-fr5ts    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientPID     12m (x3 over 12m)  kubelet          Node ha-254035-m04 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x3 over 12m)  kubelet          Node ha-254035-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x3 over 12m)  kubelet          Node ha-254035-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           12m                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   NodeReady                11m                kubelet          Node ha-254035-m04 status is now: NodeReady
	  Normal   RegisteredNode           10m                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           7m25s              node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   NodeNotReady             6m35s              node-controller  Node ha-254035-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           65s                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           64s                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           28s                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   Starting                 27s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 27s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  24s (x8 over 27s)  kubelet          Node ha-254035-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24s (x8 over 27s)  kubelet          Node ha-254035-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     24s (x8 over 27s)  kubelet          Node ha-254035-m04 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +5.779853] overlayfs: idmapped layers are currently not supported
	[Oct17 18:34] overlayfs: idmapped layers are currently not supported
	[Oct17 18:35] overlayfs: idmapped layers are currently not supported
	[Oct17 18:36] overlayfs: idmapped layers are currently not supported
	[ +20.850590] overlayfs: idmapped layers are currently not supported
	[Oct17 18:38] overlayfs: idmapped layers are currently not supported
	[ +19.812679] overlayfs: idmapped layers are currently not supported
	[Oct17 18:39] overlayfs: idmapped layers are currently not supported
	[ +19.225178] overlayfs: idmapped layers are currently not supported
	[Oct17 18:40] overlayfs: idmapped layers are currently not supported
	[Oct17 18:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 18:57] overlayfs: idmapped layers are currently not supported
	[Oct17 19:03] overlayfs: idmapped layers are currently not supported
	[Oct17 19:04] overlayfs: idmapped layers are currently not supported
	[Oct17 19:17] overlayfs: idmapped layers are currently not supported
	[Oct17 19:18] overlayfs: idmapped layers are currently not supported
	[Oct17 19:19] overlayfs: idmapped layers are currently not supported
	[Oct17 19:21] overlayfs: idmapped layers are currently not supported
	[Oct17 19:22] overlayfs: idmapped layers are currently not supported
	[Oct17 19:23] overlayfs: idmapped layers are currently not supported
	[  +4.119232] overlayfs: idmapped layers are currently not supported
	[Oct17 19:32] overlayfs: idmapped layers are currently not supported
	[  +2.727676] overlayfs: idmapped layers are currently not supported
	[ +41.644994] overlayfs: idmapped layers are currently not supported
	[Oct17 19:33] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ab6180a80f68dcb65397cf72c97a3f14b4b536aa865a3b252a4a6ebf62d58b59] <==
	{"level":"info","ts":"2025-10-17T19:33:02.869380Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"info","ts":"2025-10-17T19:33:02.912576Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"51e6bdeadc5ac63a","stream-type":"stream Message"}
	{"level":"info","ts":"2025-10-17T19:33:02.912744Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"info","ts":"2025-10-17T19:33:03.092004Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"info","ts":"2025-10-17T19:33:03.094998Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"warn","ts":"2025-10-17T19:33:03.846962Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:33:03.848354Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:33:03.904596Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"51e6bdeadc5ac63a","error":"failed to dial 51e6bdeadc5ac63a on stream MsgApp v2 (EOF)"}
	{"level":"warn","ts":"2025-10-17T19:33:04.073057Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"warn","ts":"2025-10-17T19:33:05.634743Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"51e6bdeadc5ac63a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T19:33:05.634793Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"51e6bdeadc5ac63a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T19:33:08.019198Z","caller":"rafthttp/stream.go:193","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"warn","ts":"2025-10-17T19:33:09.636609Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"51e6bdeadc5ac63a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T19:33:09.636666Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"51e6bdeadc5ac63a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T19:33:13.638319Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"51e6bdeadc5ac63a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T19:33:13.638379Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"51e6bdeadc5ac63a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-10-17T19:33:15.389351Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"51e6bdeadc5ac63a","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-10-17T19:33:15.389402Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"51e6bdeadc5ac63a"}
	{"level":"info","ts":"2025-10-17T19:33:15.389416Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"info","ts":"2025-10-17T19:33:15.389726Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"51e6bdeadc5ac63a","stream-type":"stream Message"}
	{"level":"info","ts":"2025-10-17T19:33:15.389754Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"info","ts":"2025-10-17T19:33:15.432207Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"info","ts":"2025-10-17T19:33:15.432664Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"warn","ts":"2025-10-17T19:33:56.466968Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"215.192801ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:500 size:367635"}
	{"level":"info","ts":"2025-10-17T19:33:56.467049Z","caller":"traceutil/trace.go:172","msg":"trace[1189122698] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:500; response_revision:3372; }","duration":"215.291612ms","start":"2025-10-17T19:33:56.251745Z","end":"2025-10-17T19:33:56.467036Z","steps":["trace[1189122698] 'range keys from bolt db'  (duration: 214.220961ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:33:56 up  2:16,  0 user,  load average: 4.82, 2.56, 1.74
	Linux ha-254035 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e9ece41337b80cfabb4196dc2d55dc644a949f49cd22450cf623b7f5257d5d69] <==
	I1017 19:33:22.208940       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	I1017 19:33:32.207686       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:33:32.207737       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:33:32.207909       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 19:33:32.207918       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	I1017 19:33:32.208237       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:33:32.208272       1 main.go:301] handling current node
	I1017 19:33:32.208285       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:33:32.208290       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:33:42.232363       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:33:42.232440       1 main.go:301] handling current node
	I1017 19:33:42.232462       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:33:42.232470       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:33:42.232739       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:33:42.232776       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:33:42.232873       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 19:33:42.232890       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	I1017 19:33:52.206912       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:33:52.206964       1 main.go:301] handling current node
	I1017 19:33:52.206980       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:33:52.206986       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:33:52.207125       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:33:52.207150       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:33:52.207205       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 19:33:52.207215       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [fb534fcdb2d895a4c9c908d2c41c5a3a49e1ba7a9a8c54cca3e0f68236d86194] <==
	{"level":"warn","ts":"2025-10-17T19:32:45.556106Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001deba40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-17T19:32:45.556124Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028872c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	I1017 19:32:45.742745       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:32:45.761612       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:32:45.766614       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 19:32:45.766727       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 19:32:45.766874       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 19:32:45.766889       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 19:32:45.772156       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 19:32:45.782338       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 19:32:45.782660       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 19:32:45.782735       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 19:32:45.786264       1 cache.go:39] Caches are synced for autoregister controller
	I1017 19:32:45.801116       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 19:32:45.801154       1 policy_source.go:240] refreshing policies
	I1017 19:32:45.801215       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 19:32:45.801340       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 19:32:45.823912       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1017 19:32:45.892067       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 19:32:46.104708       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:32:51.664034       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:32:51.782010       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:32:51.908184       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:32:52.058599       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 19:32:52.107924       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [cd9dec0514b2422e9e0e06a464213e0f38cdfce11c6ca20c97c479d028fcac71] <==
	I1017 19:32:51.689156       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 19:32:51.696612       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 19:32:51.700277       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 19:32:51.702304       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:32:51.702337       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 19:32:51.702705       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 19:32:51.703169       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 19:32:51.704899       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 19:32:51.705461       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 19:32:51.705774       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 19:32:51.705860       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 19:32:51.707308       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 19:32:51.708143       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:32:51.708196       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 19:32:51.713230       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 19:32:51.722295       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:32:51.793811       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-254035-m04"
	I1017 19:32:51.793885       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-254035"
	I1017 19:32:51.793911       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-254035-m02"
	I1017 19:32:51.793948       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-254035-m03"
	I1017 19:32:51.794411       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	I1017 19:32:56.794689       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 19:33:32.102831       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-m4bp9 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-m4bp9\": the object has been modified; please apply your changes to the latest version and try again"
	I1017 19:33:32.116286       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"9bc45666-7349-43f1-b1bc-8fe50797293b", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-m4bp9 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-m4bp9": the object has been modified; please apply your changes to the latest version and try again
	I1017 19:33:42.572582       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-254035-m04"
	
	
	==> kube-controller-manager [d713edbb381bb7ac4baa67d925ebd85ec5ab61fa9319db2f03ba47d667e26940] <==
	I1017 19:32:15.577934       1 serving.go:386] Generated self-signed cert in-memory
	I1017 19:32:17.585378       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1017 19:32:17.585478       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:32:17.587388       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1017 19:32:17.588088       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1017 19:32:17.588254       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 19:32:17.588373       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1017 19:32:32.131519       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [d51ad27d42179adee09ff705d12ad5d15a734809e4732ad3eb1c4429dc7021e6] <==
	I1017 19:32:57.743934       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:32:57.902619       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:32:57.934204       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:32:57.934232       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 19:32:57.934302       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:32:58.002595       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:32:58.002661       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:32:58.008742       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:32:58.009306       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:32:58.009381       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:32:58.011974       1 config.go:200] "Starting service config controller"
	I1017 19:32:58.011999       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:32:58.021529       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:32:58.021612       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:32:58.021667       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:32:58.021695       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:32:58.021970       1 config.go:309] "Starting node config controller"
	I1017 19:32:58.021993       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:32:58.112358       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:32:58.122792       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:32:58.122780       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:32:58.122830       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [c4609fc3fd1c0d5440395e0986380eb9eb076a0e1e1faa4ad132e67cd913032d] <==
	E1017 19:32:31.771659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:32:31.797116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:32:31.896832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:32:32.064844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:32:32.932569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:32:37.169100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:32:37.846495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:32:38.099427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:32:38.270033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:32:38.487027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:32:38.599190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:32:38.651417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 19:32:38.767857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:32:39.359080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:32:39.794118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:32:40.174663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:32:40.365511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:32:41.236604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:32:41.734978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:32:41.750769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:32:41.960587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:32:42.287351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:32:42.388652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:32:42.941963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1017 19:33:04.097110       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.424411     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-gzzsg_kube-system(9d09bb8e-ddb5-4533-9215-83fefb05a7eb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.424463     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-gzzsg" podUID="9d09bb8e-ddb5-4533-9215-83fefb05a7eb"
	Oct 17 19:32:46 ha-254035 kubelet[802]: W1017 19:32:46.425112     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/crio-ebb6a1f53c4835f98f170cb0cc9a8c381e017f19896c6a29b18d262526414238 WatchSource:0}: Error finding container ebb6a1f53c4835f98f170cb0cc9a8c381e017f19896c6a29b18d262526414238: Status 404 returned error can't find the container with id ebb6a1f53c4835f98f170cb0cc9a8c381e017f19896c6a29b18d262526414238
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.428343     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(4784cc20-6df7-4e32-bbfa-e0b3be4a1e83): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.428384     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="4784cc20-6df7-4e32-bbfa-e0b3be4a1e83"
	Oct 17 19:32:46 ha-254035 kubelet[802]: W1017 19:32:46.433597     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/crio-507d7b819debe5b3cd335ff315e790595f8a73c05cf49258f5a95ad85018e8b6 WatchSource:0}: Error finding container 507d7b819debe5b3cd335ff315e790595f8a73c05cf49258f5a95ad85018e8b6: Status 404 returned error can't find the container with id 507d7b819debe5b3cd335ff315e790595f8a73c05cf49258f5a95ad85018e8b6
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.441352     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-wbgc8_kube-system(8e82e918-326c-4295-82ea-e35a31f64287): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.441397     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-wbgc8" podUID="8e82e918-326c-4295-82ea-e35a31f64287"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.442165     802 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-254035\" already exists" pod="kube-system/kube-scheduler-ha-254035"
	Oct 17 19:32:46 ha-254035 kubelet[802]: W1017 19:32:46.458234     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/crio-0240e4c18c32a113147b1316d44dc028805e98a9876780111398a33d445c8673 WatchSource:0}: Error finding container 0240e4c18c32a113147b1316d44dc028805e98a9876780111398a33d445c8673: Status 404 returned error can't find the container with id 0240e4c18c32a113147b1316d44dc028805e98a9876780111398a33d445c8673
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.468716     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-nc6x2_default(4ced2553-3c5f-4d67-ad3c-2ed34ab319ef): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.468759     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-nc6x2" podUID="4ced2553-3c5f-4d67-ad3c-2ed34ab319ef"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.722833     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-nc6x2_default(4ced2553-3c5f-4d67-ad3c-2ed34ab319ef): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.741101     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-nc6x2" podUID="4ced2553-3c5f-4d67-ad3c-2ed34ab319ef"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.749534     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-gfklr_kube-system(8bf2b43b-91c9-4531-a571-36060412860e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755626     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-gfklr" podUID="8bf2b43b-91c9-4531-a571-36060412860e"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755218     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(4784cc20-6df7-4e32-bbfa-e0b3be4a1e83): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755307     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-gzzsg_kube-system(9d09bb8e-ddb5-4533-9215-83fefb05a7eb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755390     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-proxy start failed in pod kube-proxy-548b2_kube-system(4b772887-90df-4871-9343-69349bdda859): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755118     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-wbgc8_kube-system(8e82e918-326c-4295-82ea-e35a31f64287): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.757120     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-wbgc8" podUID="8e82e918-326c-4295-82ea-e35a31f64287"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.757234     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-gzzsg" podUID="9d09bb8e-ddb5-4533-9215-83fefb05a7eb"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.757252     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="4784cc20-6df7-4e32-bbfa-e0b3be4a1e83"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.757271     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-548b2" podUID="4b772887-90df-4871-9343-69349bdda859"
	Oct 17 19:33:28 ha-254035 kubelet[802]: I1017 19:33:28.066788     802 scope.go:117] "RemoveContainer" containerID="0cc2287088bc871e7f4dd5ef5a425a95862343c93ae9b170eadd77d685735b39"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-254035 -n ha-254035
helpers_test.go:269: (dbg) Run:  kubectl --context ha-254035 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (112.00s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (4.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.086492929s)
ha_test.go:415: expected profile "ha-254035" in json of 'profile list' to have "Degraded" status but have "HAppy" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-254035\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-254035\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesR
oot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-254035\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name
\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-dev
ice-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\"
:false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-254035
helpers_test.go:243: (dbg) docker inspect ha-254035:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8",
	        "Created": "2025-10-17T19:17:36.603472481Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 325091,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:32:05.992149801Z",
	            "FinishedAt": "2025-10-17T19:32:05.172940124Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/hosts",
	        "LogPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8-json.log",
	        "Name": "/ha-254035",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-254035:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-254035",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8",
	                "LowerDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-254035",
	                "Source": "/var/lib/docker/volumes/ha-254035/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-254035",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-254035",
	                "name.minikube.sigs.k8s.io": "ha-254035",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b1b39170e4096374d7e684a87814d212baad95e741e4cc807dce61f43c877747",
	            "SandboxKey": "/var/run/docker/netns/b1b39170e409",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-254035": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:e2:15:6d:bc:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9f667d9c3ea201faa6573d33bffc4907012785051d424eb86a31b1e09eb8b135",
	                    "EndpointID": "e9462a0e2e3d7837432ea03485390bfaae7ae9afbbbbc20020bc0ae2782b8ba7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-254035",
	                        "7f770318d5dc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-254035 -n ha-254035
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 logs -n 25: (1.930955134s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-254035 cp ha-254035-m03:/home/docker/cp-test.txt ha-254035-m04:/home/docker/cp-test_ha-254035-m03_ha-254035-m04.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test_ha-254035-m03_ha-254035-m04.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp testdata/cp-test.txt ha-254035-m04:/home/docker/cp-test.txt                                                             │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1188979754/001/cp-test_ha-254035-m04.txt │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035:/home/docker/cp-test_ha-254035-m04_ha-254035.txt                       │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035.txt                                                 │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035-m02:/home/docker/cp-test_ha-254035-m04_ha-254035-m02.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m02 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035-m02.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035-m03:/home/docker/cp-test_ha-254035-m04_ha-254035-m03.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m03 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035-m03.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ node    │ ha-254035 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ node    │ ha-254035 node start m02 --alsologtostderr -v 5                                                                                      │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:23 UTC │
	│ node    │ ha-254035 node list --alsologtostderr -v 5                                                                                           │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │                     │
	│ stop    │ ha-254035 stop --alsologtostderr -v 5                                                                                                │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │ 17 Oct 25 19:23 UTC │
	│ start   │ ha-254035 start --wait true --alsologtostderr -v 5                                                                                   │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │                     │
	│ node    │ ha-254035 node list --alsologtostderr -v 5                                                                                           │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:31 UTC │                     │
	│ node    │ ha-254035 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:31 UTC │                     │
	│ stop    │ ha-254035 stop --alsologtostderr -v 5                                                                                                │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:31 UTC │ 17 Oct 25 19:32 UTC │
	│ start   │ ha-254035 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:32 UTC │ 17 Oct 25 19:33 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:32:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:32:05.731928  324968 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:32:05.732103  324968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:32:05.732132  324968 out.go:374] Setting ErrFile to fd 2...
	I1017 19:32:05.732151  324968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:32:05.732432  324968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:32:05.732853  324968 out.go:368] Setting JSON to false
	I1017 19:32:05.733704  324968 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8077,"bootTime":1760721449,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 19:32:05.733797  324968 start.go:141] virtualization:  
	I1017 19:32:05.736996  324968 out.go:179] * [ha-254035] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 19:32:05.740976  324968 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:32:05.741039  324968 notify.go:220] Checking for updates...
	I1017 19:32:05.746791  324968 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:32:05.749627  324968 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:32:05.752435  324968 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 19:32:05.755486  324968 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 19:32:05.758645  324968 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:32:05.762073  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:05.762786  324968 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:32:05.783133  324968 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 19:32:05.783261  324968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:32:05.840860  324968 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 19:32:05.83134404 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:32:05.840970  324968 docker.go:318] overlay module found
	I1017 19:32:05.844001  324968 out.go:179] * Using the docker driver based on existing profile
	I1017 19:32:05.846818  324968 start.go:305] selected driver: docker
	I1017 19:32:05.846835  324968 start.go:925] validating driver "docker" against &{Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:32:05.846996  324968 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:32:05.847094  324968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:32:05.907256  324968 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 19:32:05.898245791 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:32:05.907667  324968 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:32:05.907704  324968 cni.go:84] Creating CNI manager for ""
	I1017 19:32:05.907768  324968 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 19:32:05.907825  324968 start.go:349] cluster config:
	{Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:32:05.911004  324968 out.go:179] * Starting "ha-254035" primary control-plane node in "ha-254035" cluster
	I1017 19:32:05.913729  324968 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:32:05.916410  324968 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:32:05.919155  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:32:05.919202  324968 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 19:32:05.919216  324968 cache.go:58] Caching tarball of preloaded images
	I1017 19:32:05.919268  324968 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:32:05.919311  324968 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:32:05.919321  324968 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:32:05.919466  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:05.938132  324968 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:32:05.938154  324968 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:32:05.938173  324968 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:32:05.938195  324968 start.go:360] acquireMachinesLock for ha-254035: {Name:mka2e39989b9cf6078778e7f6519885462ea711f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:32:05.938260  324968 start.go:364] duration metric: took 36.741µs to acquireMachinesLock for "ha-254035"
	I1017 19:32:05.938292  324968 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:32:05.938311  324968 fix.go:54] fixHost starting: 
	I1017 19:32:05.938563  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:32:05.955500  324968 fix.go:112] recreateIfNeeded on ha-254035: state=Stopped err=<nil>
	W1017 19:32:05.955532  324968 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:32:05.958901  324968 out.go:252] * Restarting existing docker container for "ha-254035" ...
	I1017 19:32:05.958986  324968 cli_runner.go:164] Run: docker start ha-254035
	I1017 19:32:06.223945  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:32:06.246991  324968 kic.go:430] container "ha-254035" state is running.
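The restart path above only re-starts the existing container and then waits for it to report the running state. A rough, self-contained Go sketch of that wait loop, shelling out to docker the same way cli_runner does (the container name is taken from the log; the retry count is an assumption):

    // containerstate.go - hedged sketch: poll "docker container inspect" until the
    // named container reports the "running" state, as the kic driver does above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func state(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        const name = "ha-254035" // container name from the log
        for i := 0; i < 30; i++ {
            s, err := state(name)
            if err == nil && s == "running" {
                fmt.Println(name, "is running")
                return
            }
            time.Sleep(time.Second)
        }
        fmt.Println("timed out waiting for", name)
    }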
	I1017 19:32:06.247441  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:32:06.267236  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:06.267478  324968 machine.go:93] provisionDockerMachine start ...
	I1017 19:32:06.267538  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:06.286531  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:06.287650  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 19:32:06.287670  324968 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:32:06.288401  324968 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:32:09.440064  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035
	
	I1017 19:32:09.440099  324968 ubuntu.go:182] provisioning hostname "ha-254035"
	I1017 19:32:09.440162  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:09.457351  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:09.457659  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 19:32:09.457674  324968 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035 && echo "ha-254035" | sudo tee /etc/hostname
	I1017 19:32:09.613626  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035
	
	I1017 19:32:09.613711  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:09.630718  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:09.631029  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 19:32:09.631045  324968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:32:09.780773  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:32:09.780802  324968 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:32:09.780820  324968 ubuntu.go:190] setting up certificates
	I1017 19:32:09.780831  324968 provision.go:84] configureAuth start
	I1017 19:32:09.780894  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:32:09.801074  324968 provision.go:143] copyHostCerts
	I1017 19:32:09.801116  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:09.801147  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:32:09.801165  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:09.801244  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:32:09.801333  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:09.801350  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:32:09.801354  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:09.801381  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:32:09.801427  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:09.801450  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:32:09.801455  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:09.801479  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:32:09.801528  324968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035 san=[127.0.0.1 192.168.49.2 ha-254035 localhost minikube]
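The provisioner signs a fresh server certificate against the minikube CA with exactly the SANs listed in the line above (two IPs and three host names). The following is a hedged illustration, not minikube's actual implementation: a self-contained Go program that signs a server certificate with those SANs from a CA whose files are assumed to be ca.pem and ca-key.pem in the working directory.

    // servercert.go - hedged sketch: sign a server cert with DNS/IP SANs using an
    // existing CA, roughly what "generating server cert ... san=[...]" refers to.
    // File names are placeholders, not minikube's real layout.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func mustDecode(path string) *pem.Block {
        b, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(b)
        if block == nil {
            panic("no PEM block in " + path)
        }
        return block
    }

    func main() {
        caCert, err := x509.ParseCertificate(mustDecode("ca.pem").Bytes)
        if err != nil {
            panic(err)
        }
        caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem").Bytes)
        if err != nil {
            panic(err)
        }
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-254035"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the log line: IPs plus host names.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            DNSNames:    []string{"ha-254035", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }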
	I1017 19:32:10.886077  324968 provision.go:177] copyRemoteCerts
	I1017 19:32:10.886156  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:32:10.886202  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:10.904681  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.010120  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:32:11.010211  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:32:11.028108  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:32:11.028165  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1017 19:32:11.044982  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:32:11.045040  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:32:11.061816  324968 provision.go:87] duration metric: took 1.280961553s to configureAuth
	I1017 19:32:11.061844  324968 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:32:11.062085  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:11.062193  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.080891  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:11.081208  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 19:32:11.081230  324968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:32:11.407184  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:32:11.407205  324968 machine.go:96] duration metric: took 5.139717317s to provisionDockerMachine
	I1017 19:32:11.407216  324968 start.go:293] postStartSetup for "ha-254035" (driver="docker")
	I1017 19:32:11.407226  324968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:32:11.407298  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:32:11.407335  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.427760  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.532299  324968 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:32:11.535879  324968 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:32:11.535910  324968 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:32:11.535921  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:32:11.535995  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:32:11.536114  324968 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:32:11.536128  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:32:11.536253  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:32:11.544245  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:32:11.561441  324968 start.go:296] duration metric: took 154.210245ms for postStartSetup
	I1017 19:32:11.561521  324968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:32:11.561565  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.578819  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.677440  324968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:32:11.681988  324968 fix.go:56] duration metric: took 5.74367054s for fixHost
	I1017 19:32:11.682016  324968 start.go:83] releasing machines lock for "ha-254035", held for 5.743742202s
	I1017 19:32:11.682098  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:32:11.699528  324968 ssh_runner.go:195] Run: cat /version.json
	I1017 19:32:11.699564  324968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:32:11.699581  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.699635  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.717585  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.718770  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.820235  324968 ssh_runner.go:195] Run: systemctl --version
	I1017 19:32:11.912550  324968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:32:11.950130  324968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:32:11.954364  324968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:32:11.954441  324968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:32:11.961885  324968 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:32:11.961962  324968 start.go:495] detecting cgroup driver to use...
	I1017 19:32:11.962000  324968 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:32:11.962067  324968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:32:11.977362  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:32:11.990093  324968 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:32:11.990161  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:32:12.005596  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:32:12.028034  324968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:32:12.152900  324968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:32:12.266767  324968 docker.go:234] disabling docker service ...
	I1017 19:32:12.266872  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:32:12.281703  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:32:12.294628  324968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:32:12.407632  324968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:32:12.520465  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:32:12.533571  324968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:32:12.547072  324968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:32:12.547164  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.555749  324968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:32:12.555816  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.564895  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.574036  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.582944  324968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:32:12.591372  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.600416  324968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.609166  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.618096  324968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:32:12.625617  324968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:32:12.633309  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:12.745158  324968 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:32:12.879102  324968 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:32:12.879171  324968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:32:12.883018  324968 start.go:563] Will wait 60s for crictl version
	I1017 19:32:12.883079  324968 ssh_runner.go:195] Run: which crictl
	I1017 19:32:12.886642  324968 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:32:12.910860  324968 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
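The crictl version output above is the CRI Version RPC answered by CRI-O on /var/run/crio/crio.sock. As a sketch only (the import paths and the unix-socket dial are assumptions, and it must run on the node itself), the same RPC can be issued directly with the cri-api client:

    // criversion.go - hedged sketch: query the CRI runtime Version RPC that backs
    // the "crictl version" output above, over the crio unix socket.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        resp, err := runtimeapi.NewRuntimeServiceClient(conn).Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Println(resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
    }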
	I1017 19:32:12.910959  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:32:12.937450  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:32:12.969308  324968 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:32:12.971996  324968 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
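That docker network inspect --format template renders a small JSON object (Name, Driver, Subnet, Gateway, MTU, ContainerIPs). A hedged Go sketch of running the same command and decoding the result; stripping a trailing comma inside ContainerIPs is an assumption about how the template output is normalised:

    // netinspect.go - hedged sketch: run the same "docker network inspect" format
    // string seen above and decode its JSON output.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    type netInfo struct {
        Name         string   `json:"Name"`
        Driver       string   `json:"Driver"`
        Subnet       string   `json:"Subnet"`
        Gateway      string   `json:"Gateway"`
        MTU          int      `json:"MTU"`
        ContainerIPs []string `json:"ContainerIPs"`
    }

    func main() {
        format := `{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}`
        out, err := exec.Command("docker", "network", "inspect", "ha-254035", "--format", format).Output()
        if err != nil {
            panic(err)
        }
        cleaned := strings.ReplaceAll(string(out), ",]", "]") // drop trailing comma, if any
        var info netInfo
        if err := json.Unmarshal([]byte(cleaned), &info); err != nil {
            panic(err)
        }
        fmt.Printf("%s: subnet %s gateway %s mtu %d\n", info.Name, info.Subnet, info.Gateway, info.MTU)
    }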
	I1017 19:32:12.987690  324968 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:32:12.991595  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:32:13.001105  324968 kubeadm.go:883] updating cluster {Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:32:13.001261  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:32:13.001318  324968 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:32:13.038776  324968 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:32:13.038803  324968 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:32:13.038896  324968 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:32:13.068706  324968 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:32:13.068731  324968 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:32:13.068740  324968 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 19:32:13.068844  324968 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:32:13.068920  324968 ssh_runner.go:195] Run: crio config
	I1017 19:32:13.128454  324968 cni.go:84] Creating CNI manager for ""
	I1017 19:32:13.128483  324968 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 19:32:13.128514  324968 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:32:13.128575  324968 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-254035 NodeName:ha-254035 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:32:13.128708  324968 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-254035"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 19:32:13.128729  324968 kube-vip.go:115] generating kube-vip config ...
	I1017 19:32:13.128779  324968 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:32:13.140710  324968 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
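kube-vip's IPVS-based control-plane load balancing is only enabled when ip_vs kernel modules are loaded; the empty lsmod output above is why the generated config below falls back to plain ARP VIP handling. A minimal sketch of the same probe done by reading /proc/modules instead of lsmod:

    // ipvscheck.go - hedged sketch: report whether any ip_vs kernel module is
    // loaded, the condition the lsmod probe above tests before enabling
    // kube-vip's IPVS-based load balancing.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/modules")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        found := false
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if strings.HasPrefix(sc.Text(), "ip_vs") {
                found = true
                break
            }
        }
        if err := sc.Err(); err != nil {
            panic(err)
        }
        fmt.Println("ip_vs loaded:", found)
    }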
	I1017 19:32:13.140824  324968 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1017 19:32:13.140891  324968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:32:13.148269  324968 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:32:13.148357  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1017 19:32:13.156108  324968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1017 19:32:13.168572  324968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:32:13.181432  324968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1017 19:32:13.193977  324968 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
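The kubeadm.yaml.new copied above is the multi-document YAML stream shown earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file). A hedged sketch of walking such a stream with gopkg.in/yaml.v3, using a placeholder file name:

    // kubeadmdocs.go - hedged sketch: iterate the documents in a multi-doc YAML
    // file like the generated kubeadm.yaml and print each document's kind.
    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // placeholder path
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    break
                }
                panic(err)
            }
            fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
        }
    }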
	I1017 19:32:13.207012  324968 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:32:13.210795  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:32:13.220459  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:13.334243  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:32:13.350459  324968 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.2
	I1017 19:32:13.350480  324968 certs.go:195] generating shared ca certs ...
	I1017 19:32:13.350496  324968 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:13.350630  324968 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:32:13.350673  324968 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:32:13.350681  324968 certs.go:257] generating profile certs ...
	I1017 19:32:13.350760  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:32:13.350837  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea
	I1017 19:32:13.350876  324968 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:32:13.350885  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:32:13.350898  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:32:13.350908  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:32:13.350918  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:32:13.350928  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:32:13.350941  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:32:13.350951  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:32:13.350962  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:32:13.351012  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:32:13.351041  324968 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:32:13.351048  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:32:13.351070  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:32:13.351095  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:32:13.351117  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:32:13.351161  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:32:13.351191  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:32:13.351207  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:13.351219  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:32:13.351856  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:32:13.375776  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:32:13.394623  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:32:13.413878  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:32:13.434296  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:32:13.456687  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:32:13.484245  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:32:13.505393  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:32:13.528512  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:32:13.550651  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:32:13.581215  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:32:13.601377  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:32:13.617352  324968 ssh_runner.go:195] Run: openssl version
	I1017 19:32:13.624146  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:32:13.633165  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:32:13.637212  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:32:13.637279  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:32:13.680086  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:32:13.689010  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:32:13.698044  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:13.701888  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:13.701957  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:13.744236  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:32:13.752213  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:32:13.760295  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:32:13.764256  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:32:13.764320  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:32:13.806422  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
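Each CA-style PEM placed under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0 and 51391683.0 above), which is how the system trust store finds it. A sketch of that hash-and-link step, shelling out to openssl just as the log does; the chosen PEM path is only an example and the program needs root:

    // cahashlink.go - hedged sketch: compute the OpenSSL subject hash of a CA PEM
    // and symlink it into /etc/ssl/certs/<hash>.0, mirroring the ln -fs calls above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        const pem = "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace any stale link, like ln -fs
        if err := os.Symlink(pem, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", pem, "->", link)
    }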
	I1017 19:32:13.814023  324968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:32:13.817664  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:32:13.858251  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:32:13.899329  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:32:13.940348  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:32:13.981700  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:32:14.022967  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
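The openssl x509 -checkend 86400 runs above pass only if each control-plane certificate remains valid for at least another 24 hours; a failure here would trigger regeneration. The equivalent check in Go, as a sketch using one of the paths from the log:

    // certexpiry.go - hedged sketch: fail if a PEM certificate expires within the
    // next 24h, the same condition "openssl x509 -checkend 86400" tests above.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            panic("not a PEM certificate")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h:", cert.NotAfter)
            os.Exit(1)
        }
        fmt.Println("certificate valid until", cert.NotAfter)
    }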
	I1017 19:32:14.071872  324968 kubeadm.go:400] StartCluster: {Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:32:14.072073  324968 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:32:14.072171  324968 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:32:14.159623  324968 cri.go:89] found id: "0652fd27f5bff0f3d194b39abbb92602f049204bb45577d9a403537b5949c8cc"
	I1017 19:32:14.159695  324968 cri.go:89] found id: ""
	I1017 19:32:14.159788  324968 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 19:32:14.178262  324968 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:32:14Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:32:14.178424  324968 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:32:14.193618  324968 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 19:32:14.193677  324968 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 19:32:14.193771  324968 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 19:32:14.214880  324968 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:32:14.215386  324968 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-254035" does not appear in /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:32:14.215555  324968 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-257739/kubeconfig needs updating (will repair): [kubeconfig missing "ha-254035" cluster setting kubeconfig missing "ha-254035" context setting]
	I1017 19:32:14.215920  324968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:14.216577  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
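The client config in the previous line is what the restart path uses to reach the apiserver: host https://192.168.49.2:8443 plus the profile's client certificate, key and the minikube CA. Built by hand with client-go it would look roughly like the sketch below (listing nodes is just an example call):

    // kapiclient.go - hedged sketch: build a client-go rest.Config equivalent to
    // the one logged above and list nodes with it.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        profile := "/home/jenkins/minikube-integration/21753-257739/.minikube"
        cfg := &rest.Config{
            Host: "https://192.168.49.2:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: profile + "/profiles/ha-254035/client.crt",
                KeyFile:  profile + "/profiles/ha-254035/client.key",
                CAFile:   profile + "/ca.crt",
            },
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Println(n.Name)
        }
    }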
	I1017 19:32:14.217294  324968 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1017 19:32:14.217346  324968 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1017 19:32:14.217362  324968 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1017 19:32:14.217367  324968 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1017 19:32:14.217427  324968 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1017 19:32:14.217452  324968 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1017 19:32:14.217940  324968 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 19:32:14.232358  324968 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1017 19:32:14.232432  324968 kubeadm.go:601] duration metric: took 38.716713ms to restartPrimaryControlPlane
	I1017 19:32:14.232455  324968 kubeadm.go:402] duration metric: took 160.594092ms to StartCluster
	I1017 19:32:14.232498  324968 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:14.232662  324968 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:32:14.233403  324968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:14.233677  324968 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:32:14.233733  324968 start.go:241] waiting for startup goroutines ...
	I1017 19:32:14.233763  324968 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:32:14.234454  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:14.239733  324968 out.go:179] * Enabled addons: 
	I1017 19:32:14.243909  324968 addons.go:514] duration metric: took 10.136788ms for enable addons: enabled=[]
	I1017 19:32:14.243996  324968 start.go:246] waiting for cluster config update ...
	I1017 19:32:14.244021  324968 start.go:255] writing updated cluster config ...
	I1017 19:32:14.247787  324968 out.go:203] 
	I1017 19:32:14.251318  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:14.251508  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:14.254862  324968 out.go:179] * Starting "ha-254035-m02" control-plane node in "ha-254035" cluster
	I1017 19:32:14.258139  324968 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:32:14.261425  324968 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:32:14.264451  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:32:14.264576  324968 cache.go:58] Caching tarball of preloaded images
	I1017 19:32:14.264510  324968 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:32:14.264972  324968 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:32:14.265018  324968 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:32:14.265234  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:14.286925  324968 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:32:14.286943  324968 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:32:14.286955  324968 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:32:14.286977  324968 start.go:360] acquireMachinesLock for ha-254035-m02: {Name:mkcf59557cfb2c18712510006a9b88f53e9d8916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:32:14.287029  324968 start.go:364] duration metric: took 36.003µs to acquireMachinesLock for "ha-254035-m02"
	I1017 19:32:14.287048  324968 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:32:14.287054  324968 fix.go:54] fixHost starting: m02
	I1017 19:32:14.287335  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:32:14.308380  324968 fix.go:112] recreateIfNeeded on ha-254035-m02: state=Stopped err=<nil>
	W1017 19:32:14.308406  324968 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:32:14.312007  324968 out.go:252] * Restarting existing docker container for "ha-254035-m02" ...
	I1017 19:32:14.312096  324968 cli_runner.go:164] Run: docker start ha-254035-m02
	I1017 19:32:14.710881  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:32:14.738971  324968 kic.go:430] container "ha-254035-m02" state is running.
	I1017 19:32:14.739337  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:32:14.764764  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:14.765007  324968 machine.go:93] provisionDockerMachine start ...
	I1017 19:32:14.765074  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:14.794957  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:14.795271  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1017 19:32:14.795287  324968 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:32:14.795888  324968 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:32:17.992435  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m02
	
	I1017 19:32:17.992457  324968 ubuntu.go:182] provisioning hostname "ha-254035-m02"
	I1017 19:32:17.992541  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:18.030394  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:18.030717  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1017 19:32:18.030730  324968 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035-m02 && echo "ha-254035-m02" | sudo tee /etc/hostname
	I1017 19:32:18.238178  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m02
	
	I1017 19:32:18.238358  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:18.269009  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:18.269312  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1017 19:32:18.269330  324968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:32:18.453189  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:32:18.453217  324968 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:32:18.453238  324968 ubuntu.go:190] setting up certificates
	I1017 19:32:18.453248  324968 provision.go:84] configureAuth start
	I1017 19:32:18.453312  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:32:18.494134  324968 provision.go:143] copyHostCerts
	I1017 19:32:18.494179  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:18.494213  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:32:18.494225  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:18.494315  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:32:18.494442  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:18.494469  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:32:18.494479  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:18.494510  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:32:18.494560  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:18.494584  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:32:18.494592  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:18.494620  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:32:18.494675  324968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035-m02 san=[127.0.0.1 192.168.49.3 ha-254035-m02 localhost minikube]
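Note: provision.go:117 above regenerates the machine's server certificate with the SAN list it prints (127.0.0.1, 192.168.49.3, ha-254035-m02, localhost, minikube). A condensed crypto/x509 sketch of issuing such a SAN-bearing certificate from an existing CA; the file paths, validity period, and PKCS#1 key format are assumptions for illustration, not minikube's exact code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// mustDecodePEM reads a PEM file and returns the DER bytes of its first block.
func mustDecodePEM(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("%s: no PEM block found", path)
	}
	return block.Bytes
}

func main() {
	// CA material corresponding to ca.pem / ca-key.pem in the log; assumes a PKCS#1 RSA key.
	caCert, err := x509.ParseCertificate(mustDecodePEM("ca.pem"))
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustDecodePEM("ca-key.pem"))
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the machine's server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	// SANs mirror the san=[...] list logged by provision.go:117.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-254035-m02"}},
		DNSNames:     []string{"ha-254035-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	// Emit the new server certificate; the real flow copies it to /etc/docker/server.pem.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}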
	I1017 19:32:19.339690  324968 provision.go:177] copyRemoteCerts
	I1017 19:32:19.339761  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:32:19.339805  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:19.360710  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:19.488967  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:32:19.489032  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:32:19.531594  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:32:19.531655  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:32:19.572626  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:32:19.572693  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:32:19.617410  324968 provision.go:87] duration metric: took 1.16414737s to configureAuth
	I1017 19:32:19.617479  324968 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:32:19.617739  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:19.617872  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:19.658286  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:19.658598  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1017 19:32:19.658613  324968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:32:20.717397  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:32:20.717469  324968 machine.go:96] duration metric: took 5.952443469s to provisionDockerMachine
	I1017 19:32:20.717493  324968 start.go:293] postStartSetup for "ha-254035-m02" (driver="docker")
	I1017 19:32:20.717527  324968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:32:20.717636  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:32:20.717717  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:20.738048  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:20.853074  324968 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:32:20.857246  324968 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:32:20.857278  324968 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:32:20.857289  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:32:20.857346  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:32:20.857423  324968 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:32:20.857437  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:32:20.857537  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:32:20.866006  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:32:20.886225  324968 start.go:296] duration metric: took 168.70092ms for postStartSetup
	I1017 19:32:20.886334  324968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:32:20.886398  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:20.912756  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:21.034286  324968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:32:21.042383  324968 fix.go:56] duration metric: took 6.755322442s for fixHost
	I1017 19:32:21.042417  324968 start.go:83] releasing machines lock for "ha-254035-m02", held for 6.755380378s
	I1017 19:32:21.042509  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:32:21.067009  324968 out.go:179] * Found network options:
	I1017 19:32:21.069796  324968 out.go:179]   - NO_PROXY=192.168.49.2
	W1017 19:32:21.072617  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:32:21.072667  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 19:32:21.072737  324968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:32:21.072783  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:21.072798  324968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:32:21.072853  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:21.106980  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:21.116734  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:21.321123  324968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:32:21.398151  324968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:32:21.398260  324968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:32:21.429985  324968 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:32:21.430019  324968 start.go:495] detecting cgroup driver to use...
	I1017 19:32:21.430052  324968 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:32:21.430120  324968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:32:21.469545  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:32:21.499838  324968 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:32:21.499915  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:32:21.546298  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:32:21.574508  324968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:32:22.043397  324968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:32:22.346332  324968 docker.go:234] disabling docker service ...
	I1017 19:32:22.346414  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:32:22.366415  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:32:22.385363  324968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:32:22.610088  324968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:32:22.882540  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:32:22.898584  324968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:32:22.925839  324968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:32:22.925982  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.941214  324968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:32:22.941380  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.952790  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.964392  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.976274  324968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:32:22.986631  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.999122  324968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:23.017402  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:23.031048  324968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:32:23.041313  324968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:32:23.054658  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:23.287821  324968 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:32:23.539139  324968 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:32:23.539262  324968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:32:23.543731  324968 start.go:563] Will wait 60s for crictl version
	I1017 19:32:23.543842  324968 ssh_runner.go:195] Run: which crictl
	I1017 19:32:23.550732  324968 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:32:23.592317  324968 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:32:23.592405  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:32:23.642337  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:32:23.710060  324968 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:32:23.713120  324968 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 19:32:23.716299  324968 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:32:23.744818  324968 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:32:23.750008  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:32:23.771597  324968 mustload.go:65] Loading cluster: ha-254035
	I1017 19:32:23.771839  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:23.772139  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:32:23.805838  324968 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:32:23.806449  324968 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.3
	I1017 19:32:23.806468  324968 certs.go:195] generating shared ca certs ...
	I1017 19:32:23.806508  324968 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:23.809795  324968 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:32:23.809866  324968 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:32:23.809883  324968 certs.go:257] generating profile certs ...
	I1017 19:32:23.809976  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:32:23.810032  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.5a836dc6
	I1017 19:32:23.810076  324968 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:32:23.810089  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:32:23.810105  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:32:23.810121  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:32:23.810138  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:32:23.810155  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:32:23.810173  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:32:23.810185  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:32:23.810197  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:32:23.810249  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:32:23.810281  324968 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:32:23.810294  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:32:23.810326  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:32:23.810354  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:32:23.810380  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:32:23.810425  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:32:23.810467  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:23.810484  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:32:23.810495  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:32:23.810560  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:23.830858  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:23.928800  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 19:32:23.933176  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 19:32:23.948803  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 19:32:23.953564  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 19:32:23.963833  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 19:32:23.970797  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 19:32:23.980707  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 19:32:23.985094  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1017 19:32:23.994719  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 19:32:23.998983  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 19:32:24.010610  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 19:32:24.015549  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 19:32:24.026675  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:32:24.046169  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:32:24.065010  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:32:24.083555  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:32:24.101835  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:32:24.121645  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:32:24.140364  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:32:24.158250  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:32:24.175078  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:32:24.192107  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:32:24.210093  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:32:24.227779  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 19:32:24.240287  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 19:32:24.253704  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 19:32:24.268887  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1017 19:32:24.281554  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 19:32:24.294030  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 19:32:24.307056  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1017 19:32:24.319713  324968 ssh_runner.go:195] Run: openssl version
	I1017 19:32:24.326454  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:32:24.334896  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:32:24.338984  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:32:24.339069  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:32:24.382244  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:32:24.389973  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:32:24.397963  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:24.402178  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:24.402260  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:24.445450  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:32:24.454057  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:32:24.462416  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:32:24.469188  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:32:24.469265  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:32:24.513771  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:32:24.526391  324968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:32:24.532093  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:32:24.577438  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:32:24.619730  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:32:24.661938  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:32:24.706695  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:32:24.750711  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
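Note: the series of `openssl x509 -noout -in <cert> -checkend 86400` runs above verifies that none of the control-plane certificates expires within the next 24 hours. The equivalent check is just a NotAfter comparison; a small illustrative Go sketch (the certificate paths are the ones from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, which is the question `openssl x509 -checkend <seconds>` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}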
	I1017 19:32:24.792693  324968 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1017 19:32:24.792815  324968 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:32:24.792847  324968 kube-vip.go:115] generating kube-vip config ...
	I1017 19:32:24.792907  324968 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:32:24.805902  324968 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:32:24.805963  324968 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1017 19:32:24.806034  324968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:32:24.815558  324968 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:32:24.815637  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 19:32:24.823591  324968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:32:24.837169  324968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:32:24.849790  324968 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 19:32:24.870243  324968 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:32:24.879498  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:32:24.891396  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:25.079299  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:32:25.098478  324968 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:32:25.098820  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:25.104996  324968 out.go:179] * Verifying Kubernetes components...
	I1017 19:32:25.107746  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:25.272984  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:32:25.289585  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 19:32:25.289670  324968 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 19:32:25.289939  324968 node_ready.go:35] waiting up to 6m0s for node "ha-254035-m02" to be "Ready" ...
	W1017 19:32:45.698726  324968 node_ready.go:57] node "ha-254035-m02" has "Ready":"Unknown" status (will retry)
	W1017 19:32:47.846677  324968 node_ready.go:57] node "ha-254035-m02" has "Ready":"Unknown" status (will retry)
	W1017 19:32:50.300191  324968 node_ready.go:57] node "ha-254035-m02" has "Ready":"Unknown" status (will retry)
	W1017 19:32:52.794234  324968 node_ready.go:57] node "ha-254035-m02" has "Ready":"Unknown" status (will retry)
	I1017 19:32:55.298996  324968 node_ready.go:49] node "ha-254035-m02" is "Ready"
	I1017 19:32:55.299027  324968 node_ready.go:38] duration metric: took 30.009056285s for node "ha-254035-m02" to be "Ready" ...
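Note: node_ready.go above polls the node object until its Ready condition turns True, retrying while the condition still reports Unknown (the four warnings before the success line). A minimal client-go sketch of that wait; the kubeconfig path and the 3-second poll interval are assumptions:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Matches the "waiting up to 6m0s for node ... to be Ready" step in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		n, err := cs.CoreV1().Nodes().Get(ctx, "ha-254035-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for node to be Ready")
		case <-time.After(3 * time.Second):
		}
	}
}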
	I1017 19:32:55.299042  324968 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:32:55.299101  324968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:32:55.311396  324968 api_server.go:72] duration metric: took 30.212852853s to wait for apiserver process to appear ...
	I1017 19:32:55.311421  324968 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:32:55.311440  324968 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 19:32:55.321736  324968 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 19:32:55.323225  324968 api_server.go:141] control plane version: v1.34.1
	I1017 19:32:55.323289  324968 api_server.go:131] duration metric: took 11.860591ms to wait for apiserver health ...
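Note: api_server.go:253 above probes /healthz on the surviving control-plane endpoint and expects a 200 with body "ok" before proceeding. A small Go sketch of that probe, trusting the cluster CA from the ca.crt path seen earlier in the log (the path, endpoint, and timeout are assumptions; default RBAC normally allows unauthenticated access to /healthz):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		log.Fatal("could not parse cluster CA")
	}

	// HTTPS client that validates the apiserver cert against the cluster CA.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}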
	I1017 19:32:55.323326  324968 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:32:55.332734  324968 system_pods.go:59] 26 kube-system pods found
	I1017 19:32:55.332788  324968 system_pods.go:61] "coredns-66bc5c9577-gfklr" [8bf2b43b-91c9-4531-a571-36060412860e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:32:55.332797  324968 system_pods.go:61] "coredns-66bc5c9577-wbgc8" [8e82e918-326c-4295-82ea-e35a31f64287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:32:55.332809  324968 system_pods.go:61] "etcd-ha-254035" [b4680f45-2e5c-49cd-8f12-76cd58e8a039] Running
	I1017 19:32:55.332819  324968 system_pods.go:61] "etcd-ha-254035-m02" [fd83b82f-417f-4a8d-b6f2-82d1a3ea4233] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:32:55.332827  324968 system_pods.go:61] "etcd-ha-254035-m03" [98b26c2c-cb88-4ade-80f5-45b9d2b82e8f] Running
	I1017 19:32:55.332832  324968 system_pods.go:61] "kindnet-2k9kj" [79d0c5f8-da5a-4d9e-b627-6746685bb4ec] Running
	I1017 19:32:55.332845  324968 system_pods.go:61] "kindnet-gzzsg" [9d09bb8e-ddb5-4533-9215-83fefb05a7eb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:32:55.332850  324968 system_pods.go:61] "kindnet-pwhwv" [45fe6d6c-f02a-45fd-807f-68edc98a1964] Running
	I1017 19:32:55.332863  324968 system_pods.go:61] "kindnet-vss98" [a6f8b1bf-7a57-4b08-ba72-5c79fe8d1cbe] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:32:55.332872  324968 system_pods.go:61] "kube-apiserver-ha-254035" [d7b4adda-06ab-4426-9829-87c607195341] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:32:55.332881  324968 system_pods.go:61] "kube-apiserver-ha-254035-m02" [9099db15-8600-470e-94c3-ca2a5eeea1ff] Running
	I1017 19:32:55.332886  324968 system_pods.go:61] "kube-apiserver-ha-254035-m03" [eb9a2a88-a691-4422-bb82-e0c198d601eb] Running
	I1017 19:32:55.332893  324968 system_pods.go:61] "kube-controller-manager-ha-254035" [9c5287e1-d9d8-4020-b6ec-b1059fff6764] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:32:55.332905  324968 system_pods.go:61] "kube-controller-manager-ha-254035-m02" [54702c01-b38e-4b5e-b7ea-e5af903630c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:32:55.332913  324968 system_pods.go:61] "kube-controller-manager-ha-254035-m03" [2bfb9df5-b257-45ec-be05-e930f56e3c7c] Running
	I1017 19:32:55.332921  324968 system_pods.go:61] "kube-proxy-548b2" [4b772887-90df-4871-9343-69349bdda859] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:32:55.332931  324968 system_pods.go:61] "kube-proxy-b4fr6" [a7ace6b8-0068-4c44-b8d9-8d66b10fa286] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:32:55.332936  324968 system_pods.go:61] "kube-proxy-fr5ts" [5c43f8a5-c3e0-4893-9ab0-c99f69a43434] Running
	I1017 19:32:55.332941  324968 system_pods.go:61] "kube-proxy-k56cv" [32bc352e-19aa-4bcf-8c5f-bb6ffa1b2f4d] Running
	I1017 19:32:55.332953  324968 system_pods.go:61] "kube-scheduler-ha-254035" [2f888dff-efbc-410b-9e14-93754573f2f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:32:55.332964  324968 system_pods.go:61] "kube-scheduler-ha-254035-m02" [dcaa8956-7720-467c-86d5-c0296adc07dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:32:55.332973  324968 system_pods.go:61] "kube-scheduler-ha-254035-m03" [00e19215-9094-448d-b734-227230b1c474] Running
	I1017 19:32:55.332981  324968 system_pods.go:61] "kube-vip-ha-254035" [777cc428-db79-4dee-abea-a428f4fabb67] Running
	I1017 19:32:55.332985  324968 system_pods.go:61] "kube-vip-ha-254035-m02" [3a49ae9c-fc6c-4ed7-9162-7ebc56124917] Running
	I1017 19:32:55.332989  324968 system_pods.go:61] "kube-vip-ha-254035-m03" [fa0f29b9-585d-4e28-9e32-7d493f0010dd] Running
	I1017 19:32:55.333000  324968 system_pods.go:61] "storage-provisioner" [4784cc20-6df7-4e32-bbfa-e0b3be4a1e83] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:32:55.333009  324968 system_pods.go:74] duration metric: took 9.659246ms to wait for pod list to return data ...
	I1017 19:32:55.333022  324968 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:32:55.344111  324968 default_sa.go:45] found service account: "default"
	I1017 19:32:55.344138  324968 default_sa.go:55] duration metric: took 11.10916ms for default service account to be created ...
	I1017 19:32:55.344149  324968 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:32:55.351885  324968 system_pods.go:86] 26 kube-system pods found
	I1017 19:32:55.351922  324968 system_pods.go:89] "coredns-66bc5c9577-gfklr" [8bf2b43b-91c9-4531-a571-36060412860e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:32:55.351933  324968 system_pods.go:89] "coredns-66bc5c9577-wbgc8" [8e82e918-326c-4295-82ea-e35a31f64287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:32:55.351940  324968 system_pods.go:89] "etcd-ha-254035" [b4680f45-2e5c-49cd-8f12-76cd58e8a039] Running
	I1017 19:32:55.351947  324968 system_pods.go:89] "etcd-ha-254035-m02" [fd83b82f-417f-4a8d-b6f2-82d1a3ea4233] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:32:55.351952  324968 system_pods.go:89] "etcd-ha-254035-m03" [98b26c2c-cb88-4ade-80f5-45b9d2b82e8f] Running
	I1017 19:32:55.351957  324968 system_pods.go:89] "kindnet-2k9kj" [79d0c5f8-da5a-4d9e-b627-6746685bb4ec] Running
	I1017 19:32:55.351966  324968 system_pods.go:89] "kindnet-gzzsg" [9d09bb8e-ddb5-4533-9215-83fefb05a7eb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:32:55.351971  324968 system_pods.go:89] "kindnet-pwhwv" [45fe6d6c-f02a-45fd-807f-68edc98a1964] Running
	I1017 19:32:55.351986  324968 system_pods.go:89] "kindnet-vss98" [a6f8b1bf-7a57-4b08-ba72-5c79fe8d1cbe] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:32:55.351997  324968 system_pods.go:89] "kube-apiserver-ha-254035" [d7b4adda-06ab-4426-9829-87c607195341] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:32:55.352003  324968 system_pods.go:89] "kube-apiserver-ha-254035-m02" [9099db15-8600-470e-94c3-ca2a5eeea1ff] Running
	I1017 19:32:55.352010  324968 system_pods.go:89] "kube-apiserver-ha-254035-m03" [eb9a2a88-a691-4422-bb82-e0c198d601eb] Running
	I1017 19:32:55.352019  324968 system_pods.go:89] "kube-controller-manager-ha-254035" [9c5287e1-d9d8-4020-b6ec-b1059fff6764] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:32:55.352031  324968 system_pods.go:89] "kube-controller-manager-ha-254035-m02" [54702c01-b38e-4b5e-b7ea-e5af903630c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:32:55.352036  324968 system_pods.go:89] "kube-controller-manager-ha-254035-m03" [2bfb9df5-b257-45ec-be05-e930f56e3c7c] Running
	I1017 19:32:55.352043  324968 system_pods.go:89] "kube-proxy-548b2" [4b772887-90df-4871-9343-69349bdda859] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:32:55.352051  324968 system_pods.go:89] "kube-proxy-b4fr6" [a7ace6b8-0068-4c44-b8d9-8d66b10fa286] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:32:55.352056  324968 system_pods.go:89] "kube-proxy-fr5ts" [5c43f8a5-c3e0-4893-9ab0-c99f69a43434] Running
	I1017 19:32:55.352062  324968 system_pods.go:89] "kube-proxy-k56cv" [32bc352e-19aa-4bcf-8c5f-bb6ffa1b2f4d] Running
	I1017 19:32:55.352068  324968 system_pods.go:89] "kube-scheduler-ha-254035" [2f888dff-efbc-410b-9e14-93754573f2f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:32:55.352086  324968 system_pods.go:89] "kube-scheduler-ha-254035-m02" [dcaa8956-7720-467c-86d5-c0296adc07dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:32:55.352091  324968 system_pods.go:89] "kube-scheduler-ha-254035-m03" [00e19215-9094-448d-b734-227230b1c474] Running
	I1017 19:32:55.352096  324968 system_pods.go:89] "kube-vip-ha-254035" [777cc428-db79-4dee-abea-a428f4fabb67] Running
	I1017 19:32:55.352100  324968 system_pods.go:89] "kube-vip-ha-254035-m02" [3a49ae9c-fc6c-4ed7-9162-7ebc56124917] Running
	I1017 19:32:55.352108  324968 system_pods.go:89] "kube-vip-ha-254035-m03" [fa0f29b9-585d-4e28-9e32-7d493f0010dd] Running
	I1017 19:32:55.352116  324968 system_pods.go:89] "storage-provisioner" [4784cc20-6df7-4e32-bbfa-e0b3be4a1e83] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:32:55.352123  324968 system_pods.go:126] duration metric: took 7.969634ms to wait for k8s-apps to be running ...
	I1017 19:32:55.352135  324968 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:32:55.352192  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:32:55.367145  324968 system_svc.go:56] duration metric: took 14.999806ms WaitForService to wait for kubelet
	I1017 19:32:55.367171  324968 kubeadm.go:586] duration metric: took 30.268632021s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:32:55.367192  324968 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:32:55.370727  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:32:55.370762  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:32:55.370773  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:32:55.370778  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:32:55.370782  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:32:55.370786  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:32:55.370790  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:32:55.370793  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:32:55.370798  324968 node_conditions.go:105] duration metric: took 3.600536ms to run NodePressure ...
	I1017 19:32:55.370811  324968 start.go:241] waiting for startup goroutines ...
	I1017 19:32:55.370845  324968 start.go:255] writing updated cluster config ...
	I1017 19:32:55.374424  324968 out.go:203] 
	I1017 19:32:55.377636  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:55.377758  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:55.381262  324968 out.go:179] * Starting "ha-254035-m03" control-plane node in "ha-254035" cluster
	I1017 19:32:55.385137  324968 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:32:55.388169  324968 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:32:55.391014  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:32:55.391065  324968 cache.go:58] Caching tarball of preloaded images
	I1017 19:32:55.391130  324968 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:32:55.391213  324968 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:32:55.391250  324968 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:32:55.391408  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:55.410277  324968 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:32:55.410300  324968 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:32:55.410323  324968 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:32:55.410347  324968 start.go:360] acquireMachinesLock for ha-254035-m03: {Name:mked9f1e3aab9db3df3b59f9799fd7eb1b9dc756 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:32:55.410421  324968 start.go:364] duration metric: took 54.473µs to acquireMachinesLock for "ha-254035-m03"
	I1017 19:32:55.410445  324968 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:32:55.410454  324968 fix.go:54] fixHost starting: m03
	I1017 19:32:55.410732  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m03 --format={{.State.Status}}
	I1017 19:32:55.427703  324968 fix.go:112] recreateIfNeeded on ha-254035-m03: state=Stopped err=<nil>
	W1017 19:32:55.427730  324968 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:32:55.431363  324968 out.go:252] * Restarting existing docker container for "ha-254035-m03" ...
	I1017 19:32:55.431457  324968 cli_runner.go:164] Run: docker start ha-254035-m03
	I1017 19:32:55.755807  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m03 --format={{.State.Status}}
	I1017 19:32:55.777127  324968 kic.go:430] container "ha-254035-m03" state is running.
	I1017 19:32:55.777489  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m03
	I1017 19:32:55.800244  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:55.800494  324968 machine.go:93] provisionDockerMachine start ...
	I1017 19:32:55.800582  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:32:55.829783  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:55.830097  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 19:32:55.830107  324968 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:32:55.830700  324968 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:32:59.026446  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m03
	
	I1017 19:32:59.026469  324968 ubuntu.go:182] provisioning hostname "ha-254035-m03"
	I1017 19:32:59.026531  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:32:59.057027  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:59.057341  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 19:32:59.057359  324968 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035-m03 && echo "ha-254035-m03" | sudo tee /etc/hostname
	I1017 19:32:59.282090  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m03
	
	I1017 19:32:59.282168  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:32:59.325073  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:59.325398  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 19:32:59.325420  324968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:32:59.509111  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
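The shell run over SSH above either rewrites an existing 127.0.1.1 entry in /etc/hosts or appends a new one for the node's hostname. Purely as an illustration (not the ubuntu.go source), the same script can be assembled for an arbitrary hostname:

package main

import "fmt"

// hostsScript returns the shell snippet that maps 127.0.1.1 to the given
// hostname, rewriting an existing entry or appending a new one.
func hostsScript(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsScript("ha-254035-m03"))
}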
	I1017 19:32:59.509181  324968 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:32:59.509265  324968 ubuntu.go:190] setting up certificates
	I1017 19:32:59.509297  324968 provision.go:84] configureAuth start
	I1017 19:32:59.509400  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m03
	I1017 19:32:59.548783  324968 provision.go:143] copyHostCerts
	I1017 19:32:59.548834  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:59.548871  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:32:59.548878  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:59.548957  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:32:59.549040  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:59.549072  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:32:59.549078  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:59.549106  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:32:59.549151  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:59.549168  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:32:59.549172  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:59.549195  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:32:59.549242  324968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035-m03 san=[127.0.0.1 192.168.49.4 ha-254035-m03 localhost minikube]
	I1017 19:33:00.043691  324968 provision.go:177] copyRemoteCerts
	I1017 19:33:00.043871  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:33:00.043944  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:00.064471  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:33:00.223369  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:33:00.223446  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:33:00.260611  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:33:00.260683  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:33:00.317143  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:33:00.317306  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:33:00.385743  324968 provision.go:87] duration metric: took 876.417393ms to configureAuth
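provision.go's configureAuth step above generates a server certificate signed by the minikube CA with the node's IP and hostnames as SANs (san=[127.0.0.1 192.168.49.4 ha-254035-m03 localhost minikube]). A condensed crypto/x509 sketch of that idea follows; unlike the real flow, which loads ca.pem/ca-key.pem from disk, the CA here is created inline and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA (the real flow reuses ca.pem / ca-key.pem on disk).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs seen in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-254035-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-254035-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
	}
	if _, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey); err != nil {
		panic(err)
	}
}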
	I1017 19:33:00.385819  324968 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:33:00.386115  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:00.386276  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:00.432179  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:00.432495  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 19:33:00.432512  324968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:33:00.901503  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:33:00.901591  324968 machine.go:96] duration metric: took 5.101084009s to provisionDockerMachine
	I1017 19:33:00.901618  324968 start.go:293] postStartSetup for "ha-254035-m03" (driver="docker")
	I1017 19:33:00.901662  324968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:33:00.901753  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:33:00.901835  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:00.927269  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:33:01.051646  324968 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:33:01.055666  324968 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:33:01.055692  324968 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:33:01.055704  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:33:01.055763  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:33:01.055854  324968 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:33:01.055866  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:33:01.055965  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:33:01.066853  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:33:01.101261  324968 start.go:296] duration metric: took 199.597831ms for postStartSetup
	I1017 19:33:01.101355  324968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:33:01.101408  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:01.130630  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:33:01.323449  324968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:33:01.379781  324968 fix.go:56] duration metric: took 5.969318931s for fixHost
	I1017 19:33:01.379809  324968 start.go:83] releasing machines lock for "ha-254035-m03", held for 5.969375603s
	I1017 19:33:01.379881  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m03
	I1017 19:33:01.416934  324968 out.go:179] * Found network options:
	I1017 19:33:01.419424  324968 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1017 19:33:01.422873  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:01.422914  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:01.422951  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:01.422967  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 19:33:01.423035  324968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:33:01.423092  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:01.423496  324968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:33:01.423560  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:01.460787  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:33:01.468755  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:33:01.901807  324968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:33:02.054376  324968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:33:02.054456  324968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:33:02.063698  324968 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:33:02.063723  324968 start.go:495] detecting cgroup driver to use...
	I1017 19:33:02.063757  324968 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:33:02.063816  324968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:33:02.083121  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:33:02.099886  324968 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:33:02.099962  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:33:02.129631  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:33:02.146247  324968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:33:02.487383  324968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:33:02.778663  324968 docker.go:234] disabling docker service ...
	I1017 19:33:02.778765  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:33:02.797150  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:33:02.816103  324968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:33:03.072265  324968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:33:03.311051  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:33:03.337034  324968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:33:03.367080  324968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:33:03.367228  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.379211  324968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:33:03.379292  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.403390  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.417512  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.434353  324968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:33:03.450504  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.465403  324968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.497155  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.516048  324968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:33:03.527113  324968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:33:03.546234  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:03.821017  324968 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:33:05.091469  324968 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.270414549s)
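The sequence at 19:33:03 rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon_cgroup, unprivileged-port sysctl) and then restarts crio, which takes roughly 1.3s here. A hedged sketch of driving a subset of those same edits, run locally through the shell rather than over minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// crioEdits mirrors a few of the sed commands in the log; they are applied
// in order and the runtime is restarted at the end.
var crioEdits = []string{
	`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo systemctl restart crio`,
}

func main() {
	for _, cmd := range crioEdits {
		if out, err := exec.Command("sh", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("%q failed: %v\n%s", cmd, err, out)
			return
		}
	}
}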
	I1017 19:33:05.091496  324968 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:33:05.091552  324968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:33:05.096822  324968 start.go:563] Will wait 60s for crictl version
	I1017 19:33:05.096899  324968 ssh_runner.go:195] Run: which crictl
	I1017 19:33:05.102601  324968 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:33:05.133868  324968 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:33:05.133956  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:33:05.169578  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:33:05.203999  324968 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:33:05.206796  324968 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 19:33:05.209777  324968 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1017 19:33:05.212751  324968 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:33:05.237841  324968 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:33:05.242830  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:33:05.255230  324968 mustload.go:65] Loading cluster: ha-254035
	I1017 19:33:05.255472  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:05.255718  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:33:05.273658  324968 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:33:05.273934  324968 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.4
	I1017 19:33:05.273942  324968 certs.go:195] generating shared ca certs ...
	I1017 19:33:05.273956  324968 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:33:05.274063  324968 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:33:05.274105  324968 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:33:05.274111  324968 certs.go:257] generating profile certs ...
	I1017 19:33:05.274183  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:33:05.274262  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.db0a5916
	I1017 19:33:05.274301  324968 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:33:05.274310  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:33:05.274333  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:33:05.274345  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:33:05.274357  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:33:05.274367  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:33:05.274379  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:33:05.274397  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:33:05.274409  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:33:05.274457  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:33:05.274485  324968 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:33:05.274493  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:33:05.274518  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:33:05.274539  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:33:05.274559  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:33:05.274597  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:33:05.274622  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:33:05.274637  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:33:05.274648  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:05.274703  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:33:05.302509  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:33:05.404899  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 19:33:05.408751  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 19:33:05.417079  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 19:33:05.420443  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 19:33:05.429786  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 19:33:05.433515  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 19:33:05.442432  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 19:33:05.446029  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1017 19:33:05.456258  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 19:33:05.460045  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 19:33:05.468819  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 19:33:05.473279  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 19:33:05.482460  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:33:05.502746  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:33:05.521060  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:33:05.540206  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:33:05.559261  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:33:05.579914  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:33:05.607376  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:33:05.624208  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:33:05.643462  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:33:05.663238  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:33:05.685107  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:33:05.703927  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 19:33:05.716945  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 19:33:05.730309  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 19:33:05.744332  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1017 19:33:05.760823  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 19:33:05.781849  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 19:33:05.797383  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1017 19:33:05.815449  324968 ssh_runner.go:195] Run: openssl version
	I1017 19:33:05.822374  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:33:05.830919  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:05.835675  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:05.835801  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:05.879325  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:33:05.888083  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:33:05.896261  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:33:05.900178  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:33:05.900239  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:33:05.943707  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:33:05.952618  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:33:05.961373  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:33:05.964981  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:33:05.965094  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:33:06.008396  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:33:06.017978  324968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:33:06.022220  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:33:06.064442  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:33:06.106411  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:33:06.147611  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:33:06.191689  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:33:06.235810  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
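Each of the openssl invocations above runs `x509 -checkend 86400`, i.e. it verifies the certificate will not expire within the next 24 hours. The same check in Go against a PEM file looks roughly like the sketch below (illustrative, not kubeadm's validation path; the path is one of the certs from this log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires before now+d, the Go equivalent of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}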
	I1017 19:33:06.278610  324968 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1017 19:33:06.278711  324968 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:33:06.278740  324968 kube-vip.go:115] generating kube-vip config ...
	I1017 19:33:06.278801  324968 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:33:06.292033  324968 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:33:06.292094  324968 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
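The kube-vip.go lines at 19:33:06 show the fallback that shaped the manifest above: when `lsmod | grep ip_vs` finds no IPVS modules, control-plane load-balancing is skipped and only the ARP-based VIP settings are emitted. A compact sketch of that decision (the probe command is the one from the log; the branch messages are only descriptive):

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable reports whether any ip_vs kernel modules are loaded,
// mirroring the `lsmod | grep ip_vs` probe in the log.
func ipvsAvailable() bool {
	err := exec.Command("sh", "-c", "lsmod | grep ip_vs").Run()
	return err == nil // grep exits non-zero when nothing matches
}

func main() {
	if ipvsAvailable() {
		fmt.Println("include control-plane load-balancing settings in the kube-vip manifest")
	} else {
		fmt.Println("skip load-balancing; keep the ARP-based VIP configuration only")
	}
}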
	I1017 19:33:06.292151  324968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:33:06.300562  324968 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:33:06.300652  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 19:33:06.314364  324968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:33:06.329602  324968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:33:06.360017  324968 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 19:33:06.379948  324968 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:33:06.383943  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:33:06.395455  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:06.558780  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:33:06.573849  324968 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:33:06.574138  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:06.579819  324968 out.go:179] * Verifying Kubernetes components...
	I1017 19:33:06.582763  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:06.726699  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:33:06.743509  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 19:33:06.743622  324968 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 19:33:06.743944  324968 node_ready.go:35] waiting up to 6m0s for node "ha-254035-m03" to be "Ready" ...
	W1017 19:33:08.748353  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:11.248113  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:13.747938  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:16.248008  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:18.248671  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:20.249311  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:22.747279  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:24.747653  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:26.749385  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	I1017 19:33:27.747523  324968 node_ready.go:49] node "ha-254035-m03" is "Ready"
	I1017 19:33:27.747558  324968 node_ready.go:38] duration metric: took 21.003579566s for node "ha-254035-m03" to be "Ready" ...
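node_ready.go polls the node's Ready condition through the API server until it flips to True, which here took about 21 seconds after kubelet start. With client-go, an equivalent wait looks roughly like the sketch below; the kubeconfig path is hypothetical and this is not minikube's node_ready implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady returns true once the named node reports Ready=True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		ok, err := nodeReady(ctx, cs, "ha-254035-m03")
		if err == nil && ok {
			fmt.Println("node Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}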
	I1017 19:33:27.747571  324968 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:33:27.747631  324968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:33:27.766700  324968 api_server.go:72] duration metric: took 21.192473888s to wait for apiserver process to appear ...
	I1017 19:33:27.766729  324968 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:33:27.766753  324968 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 19:33:27.775571  324968 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 19:33:27.776498  324968 api_server.go:141] control plane version: v1.34.1
	I1017 19:33:27.776585  324968 api_server.go:131] duration metric: took 9.846294ms to wait for apiserver health ...
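The healthz probe above is a plain HTTPS GET against /healthz that expects a 200 response with body "ok". A minimal Go sketch of the same request, trusting the cluster CA from the paths seen in this log rather than skipping verification:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}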
	I1017 19:33:27.776595  324968 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:33:27.783374  324968 system_pods.go:59] 26 kube-system pods found
	I1017 19:33:27.783414  324968 system_pods.go:61] "coredns-66bc5c9577-gfklr" [8bf2b43b-91c9-4531-a571-36060412860e] Running
	I1017 19:33:27.783426  324968 system_pods.go:61] "coredns-66bc5c9577-wbgc8" [8e82e918-326c-4295-82ea-e35a31f64287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:33:27.783431  324968 system_pods.go:61] "etcd-ha-254035" [b4680f45-2e5c-49cd-8f12-76cd58e8a039] Running
	I1017 19:33:27.783438  324968 system_pods.go:61] "etcd-ha-254035-m02" [fd83b82f-417f-4a8d-b6f2-82d1a3ea4233] Running
	I1017 19:33:27.783442  324968 system_pods.go:61] "etcd-ha-254035-m03" [98b26c2c-cb88-4ade-80f5-45b9d2b82e8f] Running
	I1017 19:33:27.783446  324968 system_pods.go:61] "kindnet-2k9kj" [79d0c5f8-da5a-4d9e-b627-6746685bb4ec] Running
	I1017 19:33:27.783450  324968 system_pods.go:61] "kindnet-gzzsg" [9d09bb8e-ddb5-4533-9215-83fefb05a7eb] Running
	I1017 19:33:27.783455  324968 system_pods.go:61] "kindnet-pwhwv" [45fe6d6c-f02a-45fd-807f-68edc98a1964] Running
	I1017 19:33:27.783464  324968 system_pods.go:61] "kindnet-vss98" [a6f8b1bf-7a57-4b08-ba72-5c79fe8d1cbe] Running
	I1017 19:33:27.783469  324968 system_pods.go:61] "kube-apiserver-ha-254035" [d7b4adda-06ab-4426-9829-87c607195341] Running
	I1017 19:33:27.783480  324968 system_pods.go:61] "kube-apiserver-ha-254035-m02" [9099db15-8600-470e-94c3-ca2a5eeea1ff] Running
	I1017 19:33:27.783484  324968 system_pods.go:61] "kube-apiserver-ha-254035-m03" [eb9a2a88-a691-4422-bb82-e0c198d601eb] Running
	I1017 19:33:27.783489  324968 system_pods.go:61] "kube-controller-manager-ha-254035" [9c5287e1-d9d8-4020-b6ec-b1059fff6764] Running
	I1017 19:33:27.783500  324968 system_pods.go:61] "kube-controller-manager-ha-254035-m02" [54702c01-b38e-4b5e-b7ea-e5af903630c0] Running
	I1017 19:33:27.783505  324968 system_pods.go:61] "kube-controller-manager-ha-254035-m03" [2bfb9df5-b257-45ec-be05-e930f56e3c7c] Running
	I1017 19:33:27.783509  324968 system_pods.go:61] "kube-proxy-548b2" [4b772887-90df-4871-9343-69349bdda859] Running
	I1017 19:33:27.783519  324968 system_pods.go:61] "kube-proxy-b4fr6" [a7ace6b8-0068-4c44-b8d9-8d66b10fa286] Running
	I1017 19:33:27.783524  324968 system_pods.go:61] "kube-proxy-fr5ts" [5c43f8a5-c3e0-4893-9ab0-c99f69a43434] Running
	I1017 19:33:27.783528  324968 system_pods.go:61] "kube-proxy-k56cv" [32bc352e-19aa-4bcf-8c5f-bb6ffa1b2f4d] Running
	I1017 19:33:27.783532  324968 system_pods.go:61] "kube-scheduler-ha-254035" [2f888dff-efbc-410b-9e14-93754573f2f6] Running
	I1017 19:33:27.783536  324968 system_pods.go:61] "kube-scheduler-ha-254035-m02" [dcaa8956-7720-467c-86d5-c0296adc07dc] Running
	I1017 19:33:27.783541  324968 system_pods.go:61] "kube-scheduler-ha-254035-m03" [00e19215-9094-448d-b734-227230b1c474] Running
	I1017 19:33:27.783545  324968 system_pods.go:61] "kube-vip-ha-254035" [777cc428-db79-4dee-abea-a428f4fabb67] Running
	I1017 19:33:27.783552  324968 system_pods.go:61] "kube-vip-ha-254035-m02" [3a49ae9c-fc6c-4ed7-9162-7ebc56124917] Running
	I1017 19:33:27.783556  324968 system_pods.go:61] "kube-vip-ha-254035-m03" [fa0f29b9-585d-4e28-9e32-7d493f0010dd] Running
	I1017 19:33:27.783564  324968 system_pods.go:61] "storage-provisioner" [4784cc20-6df7-4e32-bbfa-e0b3be4a1e83] Running
	I1017 19:33:27.783569  324968 system_pods.go:74] duration metric: took 6.965509ms to wait for pod list to return data ...
	I1017 19:33:27.783582  324968 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:33:27.788939  324968 default_sa.go:45] found service account: "default"
	I1017 19:33:27.788978  324968 default_sa.go:55] duration metric: took 5.380156ms for default service account to be created ...
	I1017 19:33:27.788989  324968 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:33:27.884397  324968 system_pods.go:86] 26 kube-system pods found
	I1017 19:33:27.884440  324968 system_pods.go:89] "coredns-66bc5c9577-gfklr" [8bf2b43b-91c9-4531-a571-36060412860e] Running
	I1017 19:33:27.884450  324968 system_pods.go:89] "coredns-66bc5c9577-wbgc8" [8e82e918-326c-4295-82ea-e35a31f64287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:33:27.884456  324968 system_pods.go:89] "etcd-ha-254035" [b4680f45-2e5c-49cd-8f12-76cd58e8a039] Running
	I1017 19:33:27.884462  324968 system_pods.go:89] "etcd-ha-254035-m02" [fd83b82f-417f-4a8d-b6f2-82d1a3ea4233] Running
	I1017 19:33:27.884466  324968 system_pods.go:89] "etcd-ha-254035-m03" [98b26c2c-cb88-4ade-80f5-45b9d2b82e8f] Running
	I1017 19:33:27.884475  324968 system_pods.go:89] "kindnet-2k9kj" [79d0c5f8-da5a-4d9e-b627-6746685bb4ec] Running
	I1017 19:33:27.884478  324968 system_pods.go:89] "kindnet-gzzsg" [9d09bb8e-ddb5-4533-9215-83fefb05a7eb] Running
	I1017 19:33:27.884482  324968 system_pods.go:89] "kindnet-pwhwv" [45fe6d6c-f02a-45fd-807f-68edc98a1964] Running
	I1017 19:33:27.884494  324968 system_pods.go:89] "kindnet-vss98" [a6f8b1bf-7a57-4b08-ba72-5c79fe8d1cbe] Running
	I1017 19:33:27.884505  324968 system_pods.go:89] "kube-apiserver-ha-254035" [d7b4adda-06ab-4426-9829-87c607195341] Running
	I1017 19:33:27.884525  324968 system_pods.go:89] "kube-apiserver-ha-254035-m02" [9099db15-8600-470e-94c3-ca2a5eeea1ff] Running
	I1017 19:33:27.884531  324968 system_pods.go:89] "kube-apiserver-ha-254035-m03" [eb9a2a88-a691-4422-bb82-e0c198d601eb] Running
	I1017 19:33:27.884535  324968 system_pods.go:89] "kube-controller-manager-ha-254035" [9c5287e1-d9d8-4020-b6ec-b1059fff6764] Running
	I1017 19:33:27.884540  324968 system_pods.go:89] "kube-controller-manager-ha-254035-m02" [54702c01-b38e-4b5e-b7ea-e5af903630c0] Running
	I1017 19:33:27.884545  324968 system_pods.go:89] "kube-controller-manager-ha-254035-m03" [2bfb9df5-b257-45ec-be05-e930f56e3c7c] Running
	I1017 19:33:27.884559  324968 system_pods.go:89] "kube-proxy-548b2" [4b772887-90df-4871-9343-69349bdda859] Running
	I1017 19:33:27.884563  324968 system_pods.go:89] "kube-proxy-b4fr6" [a7ace6b8-0068-4c44-b8d9-8d66b10fa286] Running
	I1017 19:33:27.884567  324968 system_pods.go:89] "kube-proxy-fr5ts" [5c43f8a5-c3e0-4893-9ab0-c99f69a43434] Running
	I1017 19:33:27.884571  324968 system_pods.go:89] "kube-proxy-k56cv" [32bc352e-19aa-4bcf-8c5f-bb6ffa1b2f4d] Running
	I1017 19:33:27.884602  324968 system_pods.go:89] "kube-scheduler-ha-254035" [2f888dff-efbc-410b-9e14-93754573f2f6] Running
	I1017 19:33:27.884606  324968 system_pods.go:89] "kube-scheduler-ha-254035-m02" [dcaa8956-7720-467c-86d5-c0296adc07dc] Running
	I1017 19:33:27.884610  324968 system_pods.go:89] "kube-scheduler-ha-254035-m03" [00e19215-9094-448d-b734-227230b1c474] Running
	I1017 19:33:27.884614  324968 system_pods.go:89] "kube-vip-ha-254035" [777cc428-db79-4dee-abea-a428f4fabb67] Running
	I1017 19:33:27.884618  324968 system_pods.go:89] "kube-vip-ha-254035-m02" [3a49ae9c-fc6c-4ed7-9162-7ebc56124917] Running
	I1017 19:33:27.884622  324968 system_pods.go:89] "kube-vip-ha-254035-m03" [fa0f29b9-585d-4e28-9e32-7d493f0010dd] Running
	I1017 19:33:27.884630  324968 system_pods.go:89] "storage-provisioner" [4784cc20-6df7-4e32-bbfa-e0b3be4a1e83] Running
	I1017 19:33:27.884636  324968 system_pods.go:126] duration metric: took 95.641254ms to wait for k8s-apps to be running ...
	I1017 19:33:27.884659  324968 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:33:27.884730  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:33:27.903571  324968 system_svc.go:56] duration metric: took 18.903653ms WaitForService to wait for kubelet
	I1017 19:33:27.903609  324968 kubeadm.go:586] duration metric: took 21.32938831s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:33:27.903634  324968 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:33:27.907627  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:27.907667  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:27.907680  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:27.907685  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:27.907689  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:27.907694  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:27.907697  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:27.907701  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:27.907706  324968 node_conditions.go:105] duration metric: took 4.066189ms to run NodePressure ...
	I1017 19:33:27.907719  324968 start.go:241] waiting for startup goroutines ...
	I1017 19:33:27.907751  324968 start.go:255] writing updated cluster config ...
	I1017 19:33:27.911402  324968 out.go:203] 
	I1017 19:33:27.915521  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:27.915649  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:33:27.918913  324968 out.go:179] * Starting "ha-254035-m04" worker node in "ha-254035" cluster
	I1017 19:33:27.921713  324968 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:33:27.924620  324968 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:33:27.927532  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:33:27.927564  324968 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:33:27.927567  324968 cache.go:58] Caching tarball of preloaded images
	I1017 19:33:27.927721  324968 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:33:27.927731  324968 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:33:27.927887  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:33:27.960833  324968 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:33:27.960852  324968 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:33:27.960865  324968 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:33:27.960889  324968 start.go:360] acquireMachinesLock for ha-254035-m04: {Name:mk584e2cd96462cdaa6d1f2088a137ff40c48733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:33:27.960940  324968 start.go:364] duration metric: took 36.438µs to acquireMachinesLock for "ha-254035-m04"
	I1017 19:33:27.960959  324968 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:33:27.960964  324968 fix.go:54] fixHost starting: m04
	I1017 19:33:27.961255  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m04 --format={{.State.Status}}
	I1017 19:33:27.995390  324968 fix.go:112] recreateIfNeeded on ha-254035-m04: state=Stopped err=<nil>
	W1017 19:33:27.995487  324968 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:33:27.999207  324968 out.go:252] * Restarting existing docker container for "ha-254035-m04" ...
	I1017 19:33:27.999295  324968 cli_runner.go:164] Run: docker start ha-254035-m04
	I1017 19:33:28.394503  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m04 --format={{.State.Status}}
	I1017 19:33:28.421995  324968 kic.go:430] container "ha-254035-m04" state is running.
	I1017 19:33:28.422449  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m04
	I1017 19:33:28.441865  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:33:28.442116  324968 machine.go:93] provisionDockerMachine start ...
	I1017 19:33:28.442199  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:28.474872  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:28.475264  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 19:33:28.475277  324968 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:33:28.476011  324968 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:33:31.633234  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m04
	
	I1017 19:33:31.633323  324968 ubuntu.go:182] provisioning hostname "ha-254035-m04"
	I1017 19:33:31.633415  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:31.653177  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:31.653483  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 19:33:31.653500  324968 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035-m04 && echo "ha-254035-m04" | sudo tee /etc/hostname
	I1017 19:33:31.837574  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m04
	
	I1017 19:33:31.837648  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:31.855639  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:31.855942  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 19:33:31.855960  324968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:33:32.021671  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:33:32.021700  324968 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:33:32.021717  324968 ubuntu.go:190] setting up certificates
	I1017 19:33:32.021728  324968 provision.go:84] configureAuth start
	I1017 19:33:32.021791  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m04
	I1017 19:33:32.058708  324968 provision.go:143] copyHostCerts
	I1017 19:33:32.058751  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:33:32.058799  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:33:32.058807  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:33:32.058887  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:33:32.058963  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:33:32.058981  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:33:32.058986  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:33:32.059011  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:33:32.059054  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:33:32.059070  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:33:32.059074  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:33:32.059096  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:33:32.059142  324968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035-m04 san=[127.0.0.1 192.168.49.5 ha-254035-m04 localhost minikube]
	I1017 19:33:32.315144  324968 provision.go:177] copyRemoteCerts
	I1017 19:33:32.315269  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:33:32.315346  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:32.336727  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:32.451884  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:33:32.451953  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:33:32.477259  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:33:32.477335  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:33:32.496861  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:33:32.496932  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:33:32.517190  324968 provision.go:87] duration metric: took 495.446144ms to configureAuth
	I1017 19:33:32.517214  324968 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:33:32.517497  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:32.517606  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:32.538066  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:32.538377  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 19:33:32.538397  324968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:33:32.868308  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:33:32.868331  324968 machine.go:96] duration metric: took 4.426196148s to provisionDockerMachine
	I1017 19:33:32.868343  324968 start.go:293] postStartSetup for "ha-254035-m04" (driver="docker")
	I1017 19:33:32.868353  324968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:33:32.868430  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:33:32.868488  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:32.888400  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:33.003003  324968 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:33:33.008119  324968 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:33:33.008155  324968 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:33:33.008169  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:33:33.008242  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:33:33.008327  324968 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:33:33.008339  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:33:33.008446  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:33:33.018512  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:33:33.048826  324968 start.go:296] duration metric: took 180.468283ms for postStartSetup
	I1017 19:33:33.048927  324968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:33:33.048979  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:33.068864  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:33.183386  324968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:33:33.188620  324968 fix.go:56] duration metric: took 5.227645919s for fixHost
	I1017 19:33:33.188649  324968 start.go:83] releasing machines lock for "ha-254035-m04", held for 5.227700884s
	I1017 19:33:33.188718  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m04
	I1017 19:33:33.212152  324968 out.go:179] * Found network options:
	I1017 19:33:33.215093  324968 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1017 19:33:33.217835  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217871  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217882  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217906  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217916  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217926  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 19:33:33.217995  324968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:33:33.218040  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:33.218316  324968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:33:33.218377  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:33.247548  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:33.256825  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:33.415645  324968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:33:33.492514  324968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:33:33.492637  324968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:33:33.500683  324968 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:33:33.500716  324968 start.go:495] detecting cgroup driver to use...
	I1017 19:33:33.500752  324968 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:33:33.500801  324968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:33:33.517445  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:33:33.537937  324968 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:33:33.538053  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:33:33.556447  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:33:33.576435  324968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:33:33.721164  324968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:33:33.856018  324968 docker.go:234] disabling docker service ...
	I1017 19:33:33.856163  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:33:33.874251  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:33:33.889153  324968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:33:34.059244  324968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:33:34.205588  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:33:34.223596  324968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:33:34.248335  324968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:33:34.248449  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.259664  324968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:33:34.259750  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.274225  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.284260  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.293374  324968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:33:34.301939  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.313190  324968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.322270  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.335994  324968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:33:34.345500  324968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:33:34.355597  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:34.485902  324968 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:33:34.658593  324968 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:33:34.658711  324968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:33:34.663315  324968 start.go:563] Will wait 60s for crictl version
	I1017 19:33:34.663396  324968 ssh_runner.go:195] Run: which crictl
	I1017 19:33:34.667245  324968 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:33:34.704265  324968 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:33:34.704411  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:33:34.738612  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:33:34.775046  324968 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:33:34.777914  324968 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 19:33:34.780845  324968 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1017 19:33:34.783723  324968 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1017 19:33:34.786627  324968 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:33:34.808635  324968 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:33:34.815185  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:33:34.827225  324968 mustload.go:65] Loading cluster: ha-254035
	I1017 19:33:34.827480  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:34.827743  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:33:34.847031  324968 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:33:34.847380  324968 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.5
	I1017 19:33:34.847390  324968 certs.go:195] generating shared ca certs ...
	I1017 19:33:34.847415  324968 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:33:34.847641  324968 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:33:34.847708  324968 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:33:34.847720  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:33:34.847749  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:33:34.847765  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:33:34.847775  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:33:34.847869  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:33:34.847922  324968 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:33:34.847932  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:33:34.847959  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:33:34.847999  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:33:34.848045  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:33:34.848123  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:33:34.848155  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:33:34.848175  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:33:34.848187  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:34.848206  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:33:34.868384  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:33:34.889303  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:33:34.915103  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:33:34.947695  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:33:34.970689  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:33:34.991429  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:33:35.015821  324968 ssh_runner.go:195] Run: openssl version
	I1017 19:33:35.023417  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:33:35.033117  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:33:35.038047  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:33:35.038163  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:33:35.080117  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:33:35.088886  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:33:35.098283  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:35.103083  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:35.103169  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:35.146427  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:33:35.160483  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:33:35.172663  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:33:35.177994  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:33:35.178116  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:33:35.221220  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:33:35.236438  324968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:33:35.243682  324968 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 19:33:35.243736  324968 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.34.1 crio false true} ...
	I1017 19:33:35.243840  324968 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:33:35.243919  324968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:33:35.253526  324968 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:33:35.253625  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1017 19:33:35.262623  324968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:33:35.276015  324968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:33:35.290622  324968 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:33:35.294428  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:33:35.304725  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:35.455305  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:33:35.471222  324968 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1017 19:33:35.471611  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:35.476720  324968 out.go:179] * Verifying Kubernetes components...
	I1017 19:33:35.479857  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:35.599550  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:33:35.615050  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 19:33:35.615120  324968 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 19:33:35.615344  324968 node_ready.go:35] waiting up to 6m0s for node "ha-254035-m04" to be "Ready" ...
	W1017 19:33:37.619036  324968 node_ready.go:57] node "ha-254035-m04" has "Ready":"Unknown" status (will retry)
	W1017 19:33:39.619924  324968 node_ready.go:57] node "ha-254035-m04" has "Ready":"Unknown" status (will retry)
	W1017 19:33:42.120954  324968 node_ready.go:57] node "ha-254035-m04" has "Ready":"Unknown" status (will retry)
	I1017 19:33:42.619614  324968 node_ready.go:49] node "ha-254035-m04" is "Ready"
	I1017 19:33:42.619639  324968 node_ready.go:38] duration metric: took 7.004273155s for node "ha-254035-m04" to be "Ready" ...
	I1017 19:33:42.619652  324968 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:33:42.619704  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:33:42.643671  324968 system_svc.go:56] duration metric: took 24.010635ms WaitForService to wait for kubelet
	I1017 19:33:42.643702  324968 kubeadm.go:586] duration metric: took 7.172435361s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:33:42.643720  324968 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:33:42.658471  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:42.658503  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:42.658515  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:42.658520  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:42.658524  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:42.658528  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:42.658532  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:42.658536  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:42.658541  324968 node_conditions.go:105] duration metric: took 14.815335ms to run NodePressure ...
	I1017 19:33:42.658553  324968 start.go:241] waiting for startup goroutines ...
	I1017 19:33:42.658578  324968 start.go:255] writing updated cluster config ...
	I1017 19:33:42.658896  324968 ssh_runner.go:195] Run: rm -f paused
	I1017 19:33:42.666036  324968 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:33:42.666578  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 19:33:42.748115  324968 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gfklr" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.799614  324968 pod_ready.go:94] pod "coredns-66bc5c9577-gfklr" is "Ready"
	I1017 19:33:42.799652  324968 pod_ready.go:86] duration metric: took 51.505206ms for pod "coredns-66bc5c9577-gfklr" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.799662  324968 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wbgc8" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.845846  324968 pod_ready.go:94] pod "coredns-66bc5c9577-wbgc8" is "Ready"
	I1017 19:33:42.845885  324968 pod_ready.go:86] duration metric: took 46.206115ms for pod "coredns-66bc5c9577-wbgc8" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.863051  324968 pod_ready.go:83] waiting for pod "etcd-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.871909  324968 pod_ready.go:94] pod "etcd-ha-254035" is "Ready"
	I1017 19:33:42.871935  324968 pod_ready.go:86] duration metric: took 8.855813ms for pod "etcd-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.871945  324968 pod_ready.go:83] waiting for pod "etcd-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.880198  324968 pod_ready.go:94] pod "etcd-ha-254035-m02" is "Ready"
	I1017 19:33:42.880226  324968 pod_ready.go:86] duration metric: took 8.274439ms for pod "etcd-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.880236  324968 pod_ready.go:83] waiting for pod "etcd-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.067322  324968 request.go:683] "Waited before sending request" delay="183.325668ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m03"
	I1017 19:33:43.071041  324968 pod_ready.go:94] pod "etcd-ha-254035-m03" is "Ready"
	I1017 19:33:43.071067  324968 pod_ready.go:86] duration metric: took 190.824595ms for pod "etcd-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.267504  324968 request.go:683] "Waited before sending request" delay="196.34087ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1017 19:33:43.271686  324968 pod_ready.go:83] waiting for pod "kube-apiserver-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.468020  324968 request.go:683] "Waited before sending request" delay="196.217403ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-254035"
	I1017 19:33:43.666979  324968 request.go:683] "Waited before sending request" delay="194.232504ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035"
	I1017 19:33:43.670115  324968 pod_ready.go:94] pod "kube-apiserver-ha-254035" is "Ready"
	I1017 19:33:43.670144  324968 pod_ready.go:86] duration metric: took 398.430494ms for pod "kube-apiserver-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.670153  324968 pod_ready.go:83] waiting for pod "kube-apiserver-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.867552  324968 request.go:683] "Waited before sending request" delay="197.322859ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-254035-m02"
	I1017 19:33:44.067901  324968 request.go:683] "Waited before sending request" delay="193.273769ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m02"
	I1017 19:33:44.071414  324968 pod_ready.go:94] pod "kube-apiserver-ha-254035-m02" is "Ready"
	I1017 19:33:44.071442  324968 pod_ready.go:86] duration metric: took 401.282299ms for pod "kube-apiserver-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:44.071453  324968 pod_ready.go:83] waiting for pod "kube-apiserver-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:44.267920  324968 request.go:683] "Waited before sending request" delay="196.393406ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-254035-m03"
	I1017 19:33:44.467967  324968 request.go:683] "Waited before sending request" delay="196.317182ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m03"
	I1017 19:33:44.472041  324968 pod_ready.go:94] pod "kube-apiserver-ha-254035-m03" is "Ready"
	I1017 19:33:44.472068  324968 pod_ready.go:86] duration metric: took 400.608635ms for pod "kube-apiserver-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:44.667472  324968 request.go:683] "Waited before sending request" delay="195.295893ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1017 19:33:44.671549  324968 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:44.868014  324968 request.go:683] "Waited before sending request" delay="196.366601ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-254035"
	I1017 19:33:45.067086  324968 request.go:683] "Waited before sending request" delay="193.311224ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035"
	I1017 19:33:45.072221  324968 pod_ready.go:94] pod "kube-controller-manager-ha-254035" is "Ready"
	I1017 19:33:45.072250  324968 pod_ready.go:86] duration metric: took 400.67411ms for pod "kube-controller-manager-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:45.072261  324968 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:45.267682  324968 request.go:683] "Waited before sending request" delay="195.335416ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-254035-m02"
	I1017 19:33:45.467614  324968 request.go:683] "Waited before sending request" delay="188.393045ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m02"
	I1017 19:33:45.470975  324968 pod_ready.go:94] pod "kube-controller-manager-ha-254035-m02" is "Ready"
	I1017 19:33:45.471007  324968 pod_ready.go:86] duration metric: took 398.736291ms for pod "kube-controller-manager-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:45.471017  324968 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:45.667358  324968 request.go:683] "Waited before sending request" delay="196.263104ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-254035-m03"
	I1017 19:33:45.867478  324968 request.go:683] "Waited before sending request" delay="196.63098ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m03"
	I1017 19:33:45.870372  324968 pod_ready.go:94] pod "kube-controller-manager-ha-254035-m03" is "Ready"
	I1017 19:33:45.870427  324968 pod_ready.go:86] duration metric: took 399.402071ms for pod "kube-controller-manager-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.067916  324968 request.go:683] "Waited before sending request" delay="197.353037ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1017 19:33:46.071965  324968 pod_ready.go:83] waiting for pod "kube-proxy-548b2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.267426  324968 request.go:683] "Waited before sending request" delay="195.355338ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-548b2"
	I1017 19:33:46.467392  324968 request.go:683] "Waited before sending request" delay="193.351461ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035"
	I1017 19:33:46.470716  324968 pod_ready.go:94] pod "kube-proxy-548b2" is "Ready"
	I1017 19:33:46.470745  324968 pod_ready.go:86] duration metric: took 398.750601ms for pod "kube-proxy-548b2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.470755  324968 pod_ready.go:83] waiting for pod "kube-proxy-b4fr6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.667046  324968 request.go:683] "Waited before sending request" delay="196.219848ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b4fr6"
	I1017 19:33:46.867280  324968 request.go:683] "Waited before sending request" delay="196.299896ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m02"
	I1017 19:33:46.870670  324968 pod_ready.go:94] pod "kube-proxy-b4fr6" is "Ready"
	I1017 19:33:46.870707  324968 pod_ready.go:86] duration metric: took 399.946057ms for pod "kube-proxy-b4fr6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.870717  324968 pod_ready.go:83] waiting for pod "kube-proxy-fr5ts" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:47.067054  324968 request.go:683] "Waited before sending request" delay="196.240361ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fr5ts"
	I1017 19:33:47.267565  324968 request.go:683] "Waited before sending request" delay="196.190762ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m04"
	I1017 19:33:47.467316  324968 request.go:683] "Waited before sending request" delay="96.206992ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fr5ts"
	I1017 19:33:47.667564  324968 request.go:683] "Waited before sending request" delay="186.261475ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m04"
	I1017 19:33:48.067382  324968 request.go:683] "Waited before sending request" delay="186.267596ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m04"
	I1017 19:33:48.467049  324968 request.go:683] "Waited before sending request" delay="92.145258ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m04"
	W1017 19:33:48.877689  324968 pod_ready.go:104] pod "kube-proxy-fr5ts" is not "Ready", error: <nil>
	W1017 19:33:50.877808  324968 pod_ready.go:104] pod "kube-proxy-fr5ts" is not "Ready", error: <nil>
	I1017 19:33:52.377837  324968 pod_ready.go:94] pod "kube-proxy-fr5ts" is "Ready"
	I1017 19:33:52.377866  324968 pod_ready.go:86] duration metric: took 5.507143006s for pod "kube-proxy-fr5ts" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.377876  324968 pod_ready.go:83] waiting for pod "kube-proxy-k56cv" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.386625  324968 pod_ready.go:94] pod "kube-proxy-k56cv" is "Ready"
	I1017 19:33:52.386655  324968 pod_ready.go:86] duration metric: took 8.770737ms for pod "kube-proxy-k56cv" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.390245  324968 pod_ready.go:83] waiting for pod "kube-scheduler-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.467536  324968 request.go:683] "Waited before sending request" delay="77.200252ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-254035"
	I1017 19:33:52.667089  324968 request.go:683] "Waited before sending request" delay="193.299146ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035"
	I1017 19:33:52.670454  324968 pod_ready.go:94] pod "kube-scheduler-ha-254035" is "Ready"
	I1017 19:33:52.670484  324968 pod_ready.go:86] duration metric: took 280.216212ms for pod "kube-scheduler-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.670495  324968 pod_ready.go:83] waiting for pod "kube-scheduler-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.867921  324968 request.go:683] "Waited before sending request" delay="197.327438ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-254035-m02"
	I1017 19:33:53.067947  324968 request.go:683] "Waited before sending request" delay="195.176914ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m02"
	I1017 19:33:53.072896  324968 pod_ready.go:94] pod "kube-scheduler-ha-254035-m02" is "Ready"
	I1017 19:33:53.072972  324968 pod_ready.go:86] duration metric: took 402.46965ms for pod "kube-scheduler-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:53.072997  324968 pod_ready.go:83] waiting for pod "kube-scheduler-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:53.267273  324968 request.go:683] "Waited before sending request" delay="194.142538ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-254035-m03"
	I1017 19:33:53.467118  324968 request.go:683] "Waited before sending request" delay="196.200739ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m03"
	I1017 19:33:53.470125  324968 pod_ready.go:94] pod "kube-scheduler-ha-254035-m03" is "Ready"
	I1017 19:33:53.470152  324968 pod_ready.go:86] duration metric: took 397.132807ms for pod "kube-scheduler-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:53.470163  324968 pod_ready.go:40] duration metric: took 10.804092337s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:33:53.525625  324968 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 19:33:53.530847  324968 out.go:179] * Done! kubectl is now configured to use "ha-254035" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 17 19:33:01 ha-254035 crio[667]: time="2025-10-17T19:33:01.657638061Z" level=info msg="Started container" PID=1327 containerID=e9ece41337b80cfabb4196dc2d55dc644a949f49cd22450cf623b7f5257d5d69 description=kube-system/kindnet-gzzsg/kindnet-cni id=1467213a-df01-47f7-91a8-c9ecfa2692be name=/runtime.v1.RuntimeService/StartContainer sandboxID=fe908ac1b77150ea99b48733349b105097380b5cd2e2f243156591744040d978
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.209485703Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.212893465Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.212927827Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.21295117Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.216661947Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.216697064Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.216721523Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.220161292Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.220191347Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.220215756Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.223221953Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.223254084Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:33:27 ha-254035 conmon[1135]: conmon 0cc2287088bc871e7f4d <ninfo>: container 1139 exited with status 1
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.068588792Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b7b509f3-b012-49ed-9e6d-e0ab750c4b6b name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.07344856Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=25fe3696-e90b-4a83-a3ad-33aa6af72f3d name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.077367011Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=28e7f811-dec4-4fcb-9722-3a341888b632 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.077693042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.096972398Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.097208428Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/17cd3234a8a982607354e16eb6b88983eecf7edea137eb96fbc8cd597e6577e2/merged/etc/passwd: no such file or directory"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.09724453Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/17cd3234a8a982607354e16eb6b88983eecf7edea137eb96fbc8cd597e6577e2/merged/etc/group: no such file or directory"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.108385903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.143116992Z" level=info msg="Created container f03a6dda4443a7ca4881c99c1a1b1d649515e8a1e7c9d51bf1fad01a41e7083e: kube-system/storage-provisioner/storage-provisioner" id=28e7f811-dec4-4fcb-9722-3a341888b632 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.144104625Z" level=info msg="Starting container: f03a6dda4443a7ca4881c99c1a1b1d649515e8a1e7c9d51bf1fad01a41e7083e" id=e482d8e9-fc6c-4e49-a1a6-8af83382da5d name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.153409034Z" level=info msg="Started container" PID=1450 containerID=f03a6dda4443a7ca4881c99c1a1b1d649515e8a1e7c9d51bf1fad01a41e7083e description=kube-system/storage-provisioner/storage-provisioner id=e482d8e9-fc6c-4e49-a1a6-8af83382da5d name=/runtime.v1.RuntimeService/StartContainer sandboxID=ebb6a1f53c4835f98f170cb0cc9a8c381e017f19896c6a29b18d262526414238
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	f03a6dda4443a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   31 seconds ago       Running             storage-provisioner       4                   ebb6a1f53c483       storage-provisioner                 kube-system
	e9ece41337b80       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   58 seconds ago       Running             kindnet-cni               2                   fe908ac1b7715       kindnet-gzzsg                       kube-system
	83532ba0435f2       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   59 seconds ago       Running             busybox                   2                   0240e4c18c32a       busybox-7b57f96db7-nc6x2            default
	db8d02bae2fa1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   About a minute ago   Running             coredns                   2                   507d7b819debe       coredns-66bc5c9577-wbgc8            kube-system
	706bee2267664       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   About a minute ago   Running             coredns                   2                   c6367bcfd35d4       coredns-66bc5c9577-gfklr            kube-system
	d51ad27d42179       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Running             kube-proxy                2                   7bb73f9365e64       kube-proxy-548b2                    kube-system
	0cc2287088bc8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       3                   ebb6a1f53c483       storage-provisioner                 kube-system
	cd9dec0514b24       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Running             kube-controller-manager   7                   251b6be3c0c4f       kube-controller-manager-ha-254035   kube-system
	d713edbb381bb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   6                   251b6be3c0c4f       kube-controller-manager-ha-254035   kube-system
	fb534fcdb2d89       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Running             kube-apiserver            3                   0fd33e0b5d3e5       kube-apiserver-ha-254035            kube-system
	ab6180a80f68d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Running             etcd                      2                   bc1edea2f668b       etcd-ha-254035                      kube-system
	c4609fc3fd1c0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Running             kube-scheduler            2                   32d4263a101a2       kube-scheduler-ha-254035            kube-system
	0652fd27f5bff       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   About a minute ago   Running             kube-vip                  1                   31afc78057fe9       kube-vip-ha-254035                  kube-system
	
	
	==> coredns [706bee22676646b717cd807f92b3341bc3bee9a22195d1a96f63858b9fe3f381] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35042 - 59078 "HINFO IN 7580743585985535806.8578026735020374478. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014332173s
	
	
	==> coredns [db8d02bae2fa1a6f368ea962e35a1111cb4230bcadf4709cf7545ace2d4272d6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35443 - 54421 "HINFO IN 8550404136984308969.4709042246801981974. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015029672s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-254035
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_17_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:17:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:33:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:32:45 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:32:45 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:32:45 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:32:45 +0000   Fri, 17 Oct 2025 19:32:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-254035
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                eadb5c5f-dcbb-485c-aea7-3aa5b951fd9e
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-nc6x2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-66bc5c9577-gfklr             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 coredns-66bc5c9577-wbgc8             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 etcd-ha-254035                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-gzzsg                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-254035             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-254035    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-548b2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-254035             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-254035                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 62s                  kube-proxy       
	  Normal   Starting                 8m1s                 kube-proxy       
	  Normal   Starting                 15m                  kube-proxy       
	  Normal   Starting                 16m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     16m                  kubelet          Node ha-254035 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m                  kubelet          Node ha-254035 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m                  kubelet          Node ha-254035 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 16m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           15m                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           15m                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   NodeReady                15m                  kubelet          Node ha-254035 status is now: NodeReady
	  Normal   RegisteredNode           14m                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           10m                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)    kubelet          Node ha-254035 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)    kubelet          Node ha-254035 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)    kubelet          Node ha-254035 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m29s                node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   NodeHasSufficientMemory  107s (x8 over 107s)  kubelet          Node ha-254035 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    107s (x8 over 107s)  kubelet          Node ha-254035 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     107s (x8 over 107s)  kubelet          Node ha-254035 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           69s                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           68s                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           32s                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	
	
	Name:               ha-254035-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_18_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:18:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:33:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:33:05 +0000   Fri, 17 Oct 2025 19:32:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:33:05 +0000   Fri, 17 Oct 2025 19:32:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:33:05 +0000   Fri, 17 Oct 2025 19:32:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:33:05 +0000   Fri, 17 Oct 2025 19:32:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-254035-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                6c5e97e0-fa27-407d-a976-b646e8a40ca5
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6xjlp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-254035-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-vss98                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-254035-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-254035-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-b4fr6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-254035-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-254035-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 15m                  kube-proxy       
	  Normal   Starting                 42s                  kube-proxy       
	  Normal   Starting                 10m                  kube-proxy       
	  Normal   RegisteredNode           15m                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           15m                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Warning  CgroupV1                 11m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 11m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)    kubelet          Node ha-254035-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)    kubelet          Node ha-254035-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)    kubelet          Node ha-254035-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeNotReady             10m                  node-controller  Node ha-254035-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           10m                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           7m29s                node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   NodeNotReady             6m39s                node-controller  Node ha-254035-m02 status is now: NodeNotReady
	  Normal   Starting                 104s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 104s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  104s (x8 over 104s)  kubelet          Node ha-254035-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    104s (x8 over 104s)  kubelet          Node ha-254035-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     104s (x8 over 104s)  kubelet          Node ha-254035-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           69s                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           68s                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           32s                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	
	
	Name:               ha-254035-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_20_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:19:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:33:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:33:27 +0000   Fri, 17 Oct 2025 19:33:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:33:27 +0000   Fri, 17 Oct 2025 19:33:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:33:27 +0000   Fri, 17 Oct 2025 19:33:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:33:27 +0000   Fri, 17 Oct 2025 19:33:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-254035-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                2f343c58-0cc9-444a-bc88-7799c3ff52df
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-979zm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-254035-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-2k9kj                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-254035-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-254035-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-k56cv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-254035-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-254035-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17s                kube-proxy       
	  Normal   Starting                 13m                kube-proxy       
	  Normal   RegisteredNode           14m                node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           7m29s              node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   NodeNotReady             6m39s              node-controller  Node ha-254035-m03 status is now: NodeNotReady
	  Normal   RegisteredNode           69s                node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           68s                node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node ha-254035-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node ha-254035-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node ha-254035-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           32s                node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	
	
	Name:               ha-254035-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_21_16_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:21:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:33:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:33:42 +0000   Fri, 17 Oct 2025 19:33:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:33:42 +0000   Fri, 17 Oct 2025 19:33:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:33:42 +0000   Fri, 17 Oct 2025 19:33:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:33:42 +0000   Fri, 17 Oct 2025 19:33:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-254035-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                12691412-a8b5-426e-846e-d6161e527ea6
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pwhwv       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-proxy-fr5ts    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 9s                 kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientPID     12m (x3 over 12m)  kubelet          Node ha-254035-m04 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x3 over 12m)  kubelet          Node ha-254035-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x3 over 12m)  kubelet          Node ha-254035-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           12m                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   NodeReady                12m                kubelet          Node ha-254035-m04 status is now: NodeReady
	  Normal   RegisteredNode           10m                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           7m29s              node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   NodeNotReady             6m39s              node-controller  Node ha-254035-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           69s                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           68s                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           32s                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   Starting                 31s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 31s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  28s (x8 over 31s)  kubelet          Node ha-254035-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    28s (x8 over 31s)  kubelet          Node ha-254035-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     28s (x8 over 31s)  kubelet          Node ha-254035-m04 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +5.779853] overlayfs: idmapped layers are currently not supported
	[Oct17 18:34] overlayfs: idmapped layers are currently not supported
	[Oct17 18:35] overlayfs: idmapped layers are currently not supported
	[Oct17 18:36] overlayfs: idmapped layers are currently not supported
	[ +20.850590] overlayfs: idmapped layers are currently not supported
	[Oct17 18:38] overlayfs: idmapped layers are currently not supported
	[ +19.812679] overlayfs: idmapped layers are currently not supported
	[Oct17 18:39] overlayfs: idmapped layers are currently not supported
	[ +19.225178] overlayfs: idmapped layers are currently not supported
	[Oct17 18:40] overlayfs: idmapped layers are currently not supported
	[Oct17 18:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 18:57] overlayfs: idmapped layers are currently not supported
	[Oct17 19:03] overlayfs: idmapped layers are currently not supported
	[Oct17 19:04] overlayfs: idmapped layers are currently not supported
	[Oct17 19:17] overlayfs: idmapped layers are currently not supported
	[Oct17 19:18] overlayfs: idmapped layers are currently not supported
	[Oct17 19:19] overlayfs: idmapped layers are currently not supported
	[Oct17 19:21] overlayfs: idmapped layers are currently not supported
	[Oct17 19:22] overlayfs: idmapped layers are currently not supported
	[Oct17 19:23] overlayfs: idmapped layers are currently not supported
	[  +4.119232] overlayfs: idmapped layers are currently not supported
	[Oct17 19:32] overlayfs: idmapped layers are currently not supported
	[  +2.727676] overlayfs: idmapped layers are currently not supported
	[ +41.644994] overlayfs: idmapped layers are currently not supported
	[Oct17 19:33] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ab6180a80f68dcb65397cf72c97a3f14b4b536aa865a3b252a4a6ebf62d58b59] <==
	{"level":"info","ts":"2025-10-17T19:33:02.869380Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"info","ts":"2025-10-17T19:33:02.912576Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"51e6bdeadc5ac63a","stream-type":"stream Message"}
	{"level":"info","ts":"2025-10-17T19:33:02.912744Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"info","ts":"2025-10-17T19:33:03.092004Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"info","ts":"2025-10-17T19:33:03.094998Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"warn","ts":"2025-10-17T19:33:03.846962Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:33:03.848354Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:33:03.904596Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"51e6bdeadc5ac63a","error":"failed to dial 51e6bdeadc5ac63a on stream MsgApp v2 (EOF)"}
	{"level":"warn","ts":"2025-10-17T19:33:04.073057Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"warn","ts":"2025-10-17T19:33:05.634743Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"51e6bdeadc5ac63a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T19:33:05.634793Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"51e6bdeadc5ac63a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T19:33:08.019198Z","caller":"rafthttp/stream.go:193","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"warn","ts":"2025-10-17T19:33:09.636609Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"51e6bdeadc5ac63a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T19:33:09.636666Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"51e6bdeadc5ac63a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T19:33:13.638319Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"51e6bdeadc5ac63a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T19:33:13.638379Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"51e6bdeadc5ac63a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-10-17T19:33:15.389351Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"51e6bdeadc5ac63a","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-10-17T19:33:15.389402Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"51e6bdeadc5ac63a"}
	{"level":"info","ts":"2025-10-17T19:33:15.389416Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"info","ts":"2025-10-17T19:33:15.389726Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"51e6bdeadc5ac63a","stream-type":"stream Message"}
	{"level":"info","ts":"2025-10-17T19:33:15.389754Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"info","ts":"2025-10-17T19:33:15.432207Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"info","ts":"2025-10-17T19:33:15.432664Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"51e6bdeadc5ac63a"}
	{"level":"warn","ts":"2025-10-17T19:33:56.466968Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"215.192801ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:500 size:367635"}
	{"level":"info","ts":"2025-10-17T19:33:56.467049Z","caller":"traceutil/trace.go:172","msg":"trace[1189122698] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:500; response_revision:3372; }","duration":"215.291612ms","start":"2025-10-17T19:33:56.251745Z","end":"2025-10-17T19:33:56.467036Z","steps":["trace[1189122698] 'range keys from bolt db'  (duration: 214.220961ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:34:00 up  2:16,  0 user,  load average: 4.82, 2.56, 1.74
	Linux ha-254035 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e9ece41337b80cfabb4196dc2d55dc644a949f49cd22450cf623b7f5257d5d69] <==
	I1017 19:33:22.208940       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	I1017 19:33:32.207686       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:33:32.207737       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:33:32.207909       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 19:33:32.207918       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	I1017 19:33:32.208237       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:33:32.208272       1 main.go:301] handling current node
	I1017 19:33:32.208285       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:33:32.208290       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:33:42.232363       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:33:42.232440       1 main.go:301] handling current node
	I1017 19:33:42.232462       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:33:42.232470       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:33:42.232739       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:33:42.232776       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:33:42.232873       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 19:33:42.232890       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	I1017 19:33:52.206912       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:33:52.206964       1 main.go:301] handling current node
	I1017 19:33:52.206980       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:33:52.206986       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:33:52.207125       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:33:52.207150       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:33:52.207205       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 19:33:52.207215       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [fb534fcdb2d895a4c9c908d2c41c5a3a49e1ba7a9a8c54cca3e0f68236d86194] <==
	{"level":"warn","ts":"2025-10-17T19:32:45.556106Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001deba40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-17T19:32:45.556124Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028872c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	I1017 19:32:45.742745       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:32:45.761612       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:32:45.766614       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 19:32:45.766727       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 19:32:45.766874       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 19:32:45.766889       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 19:32:45.772156       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 19:32:45.782338       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 19:32:45.782660       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 19:32:45.782735       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 19:32:45.786264       1 cache.go:39] Caches are synced for autoregister controller
	I1017 19:32:45.801116       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 19:32:45.801154       1 policy_source.go:240] refreshing policies
	I1017 19:32:45.801215       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 19:32:45.801340       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 19:32:45.823912       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1017 19:32:45.892067       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 19:32:46.104708       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:32:51.664034       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:32:51.782010       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:32:51.908184       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:32:52.058599       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 19:32:52.107924       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [cd9dec0514b2422e9e0e06a464213e0f38cdfce11c6ca20c97c479d028fcac71] <==
	I1017 19:32:51.689156       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 19:32:51.696612       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 19:32:51.700277       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 19:32:51.702304       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:32:51.702337       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 19:32:51.702705       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 19:32:51.703169       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 19:32:51.704899       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 19:32:51.705461       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 19:32:51.705774       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 19:32:51.705860       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 19:32:51.707308       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 19:32:51.708143       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:32:51.708196       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 19:32:51.713230       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 19:32:51.722295       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:32:51.793811       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-254035-m04"
	I1017 19:32:51.793885       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-254035"
	I1017 19:32:51.793911       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-254035-m02"
	I1017 19:32:51.793948       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-254035-m03"
	I1017 19:32:51.794411       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	I1017 19:32:56.794689       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 19:33:32.102831       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-m4bp9 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-m4bp9\": the object has been modified; please apply your changes to the latest version and try again"
	I1017 19:33:32.116286       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"9bc45666-7349-43f1-b1bc-8fe50797293b", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-m4bp9 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-m4bp9": the object has been modified; please apply your changes to the latest version and try again
	I1017 19:33:42.572582       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-254035-m04"
	
	
	==> kube-controller-manager [d713edbb381bb7ac4baa67d925ebd85ec5ab61fa9319db2f03ba47d667e26940] <==
	I1017 19:32:15.577934       1 serving.go:386] Generated self-signed cert in-memory
	I1017 19:32:17.585378       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1017 19:32:17.585478       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:32:17.587388       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1017 19:32:17.588088       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1017 19:32:17.588254       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 19:32:17.588373       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1017 19:32:32.131519       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [d51ad27d42179adee09ff705d12ad5d15a734809e4732ad3eb1c4429dc7021e6] <==
	I1017 19:32:57.743934       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:32:57.902619       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:32:57.934204       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:32:57.934232       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 19:32:57.934302       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:32:58.002595       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:32:58.002661       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:32:58.008742       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:32:58.009306       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:32:58.009381       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:32:58.011974       1 config.go:200] "Starting service config controller"
	I1017 19:32:58.011999       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:32:58.021529       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:32:58.021612       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:32:58.021667       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:32:58.021695       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:32:58.021970       1 config.go:309] "Starting node config controller"
	I1017 19:32:58.021993       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:32:58.112358       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:32:58.122792       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:32:58.122780       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:32:58.122830       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [c4609fc3fd1c0d5440395e0986380eb9eb076a0e1e1faa4ad132e67cd913032d] <==
	E1017 19:32:31.771659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:32:31.797116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:32:31.896832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:32:32.064844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:32:32.932569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:32:37.169100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:32:37.846495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:32:38.099427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 19:32:38.270033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:32:38.487027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:32:38.599190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:32:38.651417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 19:32:38.767857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:32:39.359080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:32:39.794118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:32:40.174663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:32:40.365511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:32:41.236604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:32:41.734978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:32:41.750769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:32:41.960587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:32:42.287351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:32:42.388652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:32:42.941963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1017 19:33:04.097110       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.424411     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-gzzsg_kube-system(9d09bb8e-ddb5-4533-9215-83fefb05a7eb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.424463     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-gzzsg" podUID="9d09bb8e-ddb5-4533-9215-83fefb05a7eb"
	Oct 17 19:32:46 ha-254035 kubelet[802]: W1017 19:32:46.425112     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/crio-ebb6a1f53c4835f98f170cb0cc9a8c381e017f19896c6a29b18d262526414238 WatchSource:0}: Error finding container ebb6a1f53c4835f98f170cb0cc9a8c381e017f19896c6a29b18d262526414238: Status 404 returned error can't find the container with id ebb6a1f53c4835f98f170cb0cc9a8c381e017f19896c6a29b18d262526414238
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.428343     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(4784cc20-6df7-4e32-bbfa-e0b3be4a1e83): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.428384     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="4784cc20-6df7-4e32-bbfa-e0b3be4a1e83"
	Oct 17 19:32:46 ha-254035 kubelet[802]: W1017 19:32:46.433597     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/crio-507d7b819debe5b3cd335ff315e790595f8a73c05cf49258f5a95ad85018e8b6 WatchSource:0}: Error finding container 507d7b819debe5b3cd335ff315e790595f8a73c05cf49258f5a95ad85018e8b6: Status 404 returned error can't find the container with id 507d7b819debe5b3cd335ff315e790595f8a73c05cf49258f5a95ad85018e8b6
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.441352     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-wbgc8_kube-system(8e82e918-326c-4295-82ea-e35a31f64287): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.441397     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-wbgc8" podUID="8e82e918-326c-4295-82ea-e35a31f64287"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.442165     802 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-254035\" already exists" pod="kube-system/kube-scheduler-ha-254035"
	Oct 17 19:32:46 ha-254035 kubelet[802]: W1017 19:32:46.458234     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/crio-0240e4c18c32a113147b1316d44dc028805e98a9876780111398a33d445c8673 WatchSource:0}: Error finding container 0240e4c18c32a113147b1316d44dc028805e98a9876780111398a33d445c8673: Status 404 returned error can't find the container with id 0240e4c18c32a113147b1316d44dc028805e98a9876780111398a33d445c8673
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.468716     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-nc6x2_default(4ced2553-3c5f-4d67-ad3c-2ed34ab319ef): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.468759     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-nc6x2" podUID="4ced2553-3c5f-4d67-ad3c-2ed34ab319ef"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.722833     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-nc6x2_default(4ced2553-3c5f-4d67-ad3c-2ed34ab319ef): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.741101     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-nc6x2" podUID="4ced2553-3c5f-4d67-ad3c-2ed34ab319ef"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.749534     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-gfklr_kube-system(8bf2b43b-91c9-4531-a571-36060412860e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755626     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-gfklr" podUID="8bf2b43b-91c9-4531-a571-36060412860e"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755218     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(4784cc20-6df7-4e32-bbfa-e0b3be4a1e83): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755307     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-gzzsg_kube-system(9d09bb8e-ddb5-4533-9215-83fefb05a7eb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755390     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-proxy start failed in pod kube-proxy-548b2_kube-system(4b772887-90df-4871-9343-69349bdda859): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755118     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-wbgc8_kube-system(8e82e918-326c-4295-82ea-e35a31f64287): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.757120     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-wbgc8" podUID="8e82e918-326c-4295-82ea-e35a31f64287"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.757234     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-gzzsg" podUID="9d09bb8e-ddb5-4533-9215-83fefb05a7eb"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.757252     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="4784cc20-6df7-4e32-bbfa-e0b3be4a1e83"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.757271     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-548b2" podUID="4b772887-90df-4871-9343-69349bdda859"
	Oct 17 19:33:28 ha-254035 kubelet[802]: I1017 19:33:28.066788     802 scope.go:117] "RemoveContainer" containerID="0cc2287088bc871e7f4dd5ef5a425a95862343c93ae9b170eadd77d685735b39"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-254035 -n ha-254035
helpers_test.go:269: (dbg) Run:  kubectl --context ha-254035 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (4.18s)
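The repeated kubelet errors in the log above ("CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars") are emitted while the freshly restarted kubelet has not yet synced its service informer cache; pod creation is retried and normally proceeds once that cache is populated. A manual follow-up check outside the test harness, using the same context name the test uses, could be something like:

	kubectl --context ha-254035 get pods -A -o wide
	kubectl --context ha-254035 get events -A --sort-by=.lastTimestamp | tail -n 20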

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (90.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 node add --control-plane --alsologtostderr -v 5
E1017 19:34:36.132709  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 node add --control-plane --alsologtostderr -v 5: (1m26.357547981s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5: (1.362968862s)
ha_test.go:618: status says not all three control-plane nodes are present: args "out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5": ha-254035
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-254035-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:621: status says not all four hosts are running: args "out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5": ha-254035
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-254035-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:624: status says not all four kubelets are running: args "out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5": ha-254035
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-254035-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:627: status says not all three apiservers are running: args "out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5": ha-254035
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-254035-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-254035-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
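
The checks above (ha_test.go:618/621/624/627) appear to expect exactly three control-plane entries and four hosts in the status output, but after "node add --control-plane" the profile reports four control-plane nodes (ha-254035, m02, m03 and the new m05) plus the worker m04, most likely because the earlier "node delete m03" never completed (it has no end time in the Audit table below). A rough manual reproduction of the count check, using the same binary the test invokes, might be:

	out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5 | grep -c 'type: Control Plane'
	out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5 | grep -c 'host: Running'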

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-254035
helpers_test.go:243: (dbg) docker inspect ha-254035:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8",
	        "Created": "2025-10-17T19:17:36.603472481Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 325091,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:32:05.992149801Z",
	            "FinishedAt": "2025-10-17T19:32:05.172940124Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/hosts",
	        "LogPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8-json.log",
	        "Name": "/ha-254035",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-254035:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-254035",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8",
	                "LowerDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-254035",
	                "Source": "/var/lib/docker/volumes/ha-254035/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-254035",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-254035",
	                "name.minikube.sigs.k8s.io": "ha-254035",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b1b39170e4096374d7e684a87814d212baad95e741e4cc807dce61f43c877747",
	            "SandboxKey": "/var/run/docker/netns/b1b39170e409",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-254035": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:e2:15:6d:bc:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9f667d9c3ea201faa6573d33bffc4907012785051d424eb86a31b1e09eb8b135",
	                    "EndpointID": "e9462a0e2e3d7837432ea03485390bfaae7ae9afbbbbc20020bc0ae2782b8ba7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-254035",
	                        "7f770318d5dc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-254035 -n ha-254035
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 logs -n 25: (1.763104748s)
helpers_test.go:260: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-254035 ssh -n ha-254035-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test_ha-254035-m03_ha-254035-m04.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp testdata/cp-test.txt ha-254035-m04:/home/docker/cp-test.txt                                                             │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1188979754/001/cp-test_ha-254035-m04.txt │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035:/home/docker/cp-test_ha-254035-m04_ha-254035.txt                       │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035.txt                                                 │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035-m02:/home/docker/cp-test_ha-254035-m04_ha-254035-m02.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m02 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035-m02.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035-m03:/home/docker/cp-test_ha-254035-m04_ha-254035-m03.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m03 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035-m03.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ node    │ ha-254035 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ node    │ ha-254035 node start m02 --alsologtostderr -v 5                                                                                      │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:23 UTC │
	│ node    │ ha-254035 node list --alsologtostderr -v 5                                                                                           │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │                     │
	│ stop    │ ha-254035 stop --alsologtostderr -v 5                                                                                                │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │ 17 Oct 25 19:23 UTC │
	│ start   │ ha-254035 start --wait true --alsologtostderr -v 5                                                                                   │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │                     │
	│ node    │ ha-254035 node list --alsologtostderr -v 5                                                                                           │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:31 UTC │                     │
	│ node    │ ha-254035 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:31 UTC │                     │
	│ stop    │ ha-254035 stop --alsologtostderr -v 5                                                                                                │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:31 UTC │ 17 Oct 25 19:32 UTC │
	│ start   │ ha-254035 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:32 UTC │ 17 Oct 25 19:33 UTC │
	│ node    │ ha-254035 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:35 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:32:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:32:05.731928  324968 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:32:05.732103  324968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:32:05.732132  324968 out.go:374] Setting ErrFile to fd 2...
	I1017 19:32:05.732151  324968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:32:05.732432  324968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:32:05.732853  324968 out.go:368] Setting JSON to false
	I1017 19:32:05.733704  324968 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8077,"bootTime":1760721449,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 19:32:05.733797  324968 start.go:141] virtualization:  
	I1017 19:32:05.736996  324968 out.go:179] * [ha-254035] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 19:32:05.740976  324968 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:32:05.741039  324968 notify.go:220] Checking for updates...
	I1017 19:32:05.746791  324968 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:32:05.749627  324968 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:32:05.752435  324968 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 19:32:05.755486  324968 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 19:32:05.758645  324968 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:32:05.762073  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:05.762786  324968 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:32:05.783133  324968 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 19:32:05.783261  324968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:32:05.840860  324968 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 19:32:05.83134404 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:32:05.840970  324968 docker.go:318] overlay module found
	I1017 19:32:05.844001  324968 out.go:179] * Using the docker driver based on existing profile
	I1017 19:32:05.846818  324968 start.go:305] selected driver: docker
	I1017 19:32:05.846835  324968 start.go:925] validating driver "docker" against &{Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:32:05.846996  324968 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:32:05.847094  324968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:32:05.907256  324968 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 19:32:05.898245791 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:32:05.907667  324968 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:32:05.907704  324968 cni.go:84] Creating CNI manager for ""
	I1017 19:32:05.907768  324968 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 19:32:05.907825  324968 start.go:349] cluster config:
	{Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:32:05.911004  324968 out.go:179] * Starting "ha-254035" primary control-plane node in "ha-254035" cluster
	I1017 19:32:05.913729  324968 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:32:05.916410  324968 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:32:05.919155  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:32:05.919202  324968 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 19:32:05.919216  324968 cache.go:58] Caching tarball of preloaded images
	I1017 19:32:05.919268  324968 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:32:05.919311  324968 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:32:05.919321  324968 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:32:05.919466  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:05.938132  324968 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:32:05.938154  324968 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:32:05.938173  324968 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:32:05.938195  324968 start.go:360] acquireMachinesLock for ha-254035: {Name:mka2e39989b9cf6078778e7f6519885462ea711f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:32:05.938260  324968 start.go:364] duration metric: took 36.741µs to acquireMachinesLock for "ha-254035"
	I1017 19:32:05.938292  324968 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:32:05.938311  324968 fix.go:54] fixHost starting: 
	I1017 19:32:05.938563  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:32:05.955500  324968 fix.go:112] recreateIfNeeded on ha-254035: state=Stopped err=<nil>
	W1017 19:32:05.955532  324968 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:32:05.958901  324968 out.go:252] * Restarting existing docker container for "ha-254035" ...
	I1017 19:32:05.958986  324968 cli_runner.go:164] Run: docker start ha-254035
	I1017 19:32:06.223945  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:32:06.246991  324968 kic.go:430] container "ha-254035" state is running.
	I1017 19:32:06.247441  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:32:06.267236  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:06.267478  324968 machine.go:93] provisionDockerMachine start ...
	I1017 19:32:06.267538  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:06.286531  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:06.287650  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 19:32:06.287670  324968 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:32:06.288401  324968 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:32:09.440064  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035
	
	I1017 19:32:09.440099  324968 ubuntu.go:182] provisioning hostname "ha-254035"
	I1017 19:32:09.440162  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:09.457351  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:09.457659  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 19:32:09.457674  324968 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035 && echo "ha-254035" | sudo tee /etc/hostname
	I1017 19:32:09.613626  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035
	
	I1017 19:32:09.613711  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:09.630718  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:09.631029  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 19:32:09.631045  324968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:32:09.780773  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:32:09.780802  324968 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:32:09.780820  324968 ubuntu.go:190] setting up certificates
	I1017 19:32:09.780831  324968 provision.go:84] configureAuth start
	I1017 19:32:09.780894  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:32:09.801074  324968 provision.go:143] copyHostCerts
	I1017 19:32:09.801116  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:09.801147  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:32:09.801165  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:09.801244  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:32:09.801333  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:09.801350  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:32:09.801354  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:09.801381  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:32:09.801427  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:09.801450  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:32:09.801455  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:09.801479  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:32:09.801528  324968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035 san=[127.0.0.1 192.168.49.2 ha-254035 localhost minikube]
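	[editor's note] provision.go:117 above issues a server certificate signed by the minikube CA carrying the SAN list shown (127.0.0.1, 192.168.49.2, ha-254035, localhost, minikube). A compressed sketch of that operation with crypto/x509; the CA here is generated on the fly purely for the example, whereas minikube loads ca.pem/ca-key.pem from the certs directory, and error handling is elided:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical CA for the example only.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the provision.go line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-254035"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-254035", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}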
	I1017 19:32:10.886077  324968 provision.go:177] copyRemoteCerts
	I1017 19:32:10.886156  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:32:10.886202  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:10.904681  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.010120  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:32:11.010211  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:32:11.028108  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:32:11.028165  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1017 19:32:11.044982  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:32:11.045040  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:32:11.061816  324968 provision.go:87] duration metric: took 1.280961553s to configureAuth
	I1017 19:32:11.061844  324968 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:32:11.062085  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:11.062193  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.080891  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:11.081208  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 19:32:11.081230  324968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:32:11.407184  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:32:11.407205  324968 machine.go:96] duration metric: took 5.139717317s to provisionDockerMachine
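	[editor's note] The CRIO_MINIKUBE_OPTIONS step above runs over the container's forwarded SSH port (127.0.0.1:33184) with the profile's id_rsa key. A rough sketch of executing that remote command with golang.org/x/crypto/ssh (illustrative only; minikube uses its own ssh_runner, and the insecure host-key callback is a test-only convenience):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address taken from the log above.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33184", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test node only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// The same command the log shows, pointing cri-o at the service CIDR as an insecure registry range.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := session.CombinedOutput(cmd)
	fmt.Println(string(out), err)
}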
	I1017 19:32:11.407216  324968 start.go:293] postStartSetup for "ha-254035" (driver="docker")
	I1017 19:32:11.407226  324968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:32:11.407298  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:32:11.407335  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.427760  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.532299  324968 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:32:11.535879  324968 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:32:11.535910  324968 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:32:11.535921  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:32:11.535995  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:32:11.536114  324968 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:32:11.536128  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:32:11.536253  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:32:11.544245  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:32:11.561441  324968 start.go:296] duration metric: took 154.210245ms for postStartSetup
	I1017 19:32:11.561521  324968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:32:11.561565  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.578819  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.677440  324968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:32:11.681988  324968 fix.go:56] duration metric: took 5.74367054s for fixHost
	I1017 19:32:11.682016  324968 start.go:83] releasing machines lock for "ha-254035", held for 5.743742202s
	I1017 19:32:11.682098  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:32:11.699528  324968 ssh_runner.go:195] Run: cat /version.json
	I1017 19:32:11.699564  324968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:32:11.699581  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.699635  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.717585  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.718770  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.820235  324968 ssh_runner.go:195] Run: systemctl --version
	I1017 19:32:11.912550  324968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:32:11.950130  324968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:32:11.954364  324968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:32:11.954441  324968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:32:11.961885  324968 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:32:11.961962  324968 start.go:495] detecting cgroup driver to use...
	I1017 19:32:11.962000  324968 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:32:11.962067  324968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:32:11.977362  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:32:11.990093  324968 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:32:11.990161  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:32:12.005596  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:32:12.028034  324968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:32:12.152900  324968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:32:12.266767  324968 docker.go:234] disabling docker service ...
	I1017 19:32:12.266872  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:32:12.281703  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:32:12.294628  324968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:32:12.407632  324968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:32:12.520465  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:32:12.533571  324968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:32:12.547072  324968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:32:12.547164  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.555749  324968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:32:12.555816  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.564895  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.574036  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.582944  324968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:32:12.591372  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.600416  324968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.609166  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.618096  324968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:32:12.625617  324968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:32:12.633309  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:12.745158  324968 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:32:12.879102  324968 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:32:12.879171  324968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:32:12.883018  324968 start.go:563] Will wait 60s for crictl version
	I1017 19:32:12.883079  324968 ssh_runner.go:195] Run: which crictl
	I1017 19:32:12.886642  324968 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:32:12.910860  324968 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:32:12.910959  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:32:12.937450  324968 ssh_runner.go:195] Run: crio --version
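	[editor's note] The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch the cgroup manager before cri-o is restarted. The same two edits, sketched locally in Go (run as root on the node; not minikube's crio.go):

package main

import (
	"os"
	"regexp"
)

func main() {
	// Drop-in path from the log above.
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Pin the pause image, as crio.go:59 logs above.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Switch the cgroup manager to cgroupfs, as crio.go:70 logs above.
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	// cri-o must then be restarted: sudo systemctl daemon-reload && sudo systemctl restart crio.
}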
	I1017 19:32:12.969308  324968 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:32:12.971996  324968 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:32:12.987690  324968 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:32:12.991595  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
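	[editor's note] The bash one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the network gateway (192.168.49.1). An equivalent idempotent update sketched in Go (illustrative; needs root on the node):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.1\thost.minikube.internal" // gateway from the log above

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Drop any stale host.minikube.internal line, then append the current mapping,
	// mirroring the grep -v / echo pipeline in the log.
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
		panic(err)
	}
}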
	I1017 19:32:13.001105  324968 kubeadm.go:883] updating cluster {Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:32:13.001261  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:32:13.001318  324968 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:32:13.038776  324968 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:32:13.038803  324968 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:32:13.038896  324968 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:32:13.068706  324968 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:32:13.068731  324968 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:32:13.068740  324968 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 19:32:13.068844  324968 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:32:13.068920  324968 ssh_runner.go:195] Run: crio config
	I1017 19:32:13.128454  324968 cni.go:84] Creating CNI manager for ""
	I1017 19:32:13.128483  324968 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 19:32:13.128514  324968 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:32:13.128575  324968 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-254035 NodeName:ha-254035 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:32:13.128708  324968 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-254035"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
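	[editor's note] The kubeadm config above is rendered from the options logged at kubeadm.go:190. A toy rendering of just the InitConfiguration stanza with text/template, using the node name and IP from this run (the template body is a trimmed excerpt for illustration, not minikube's real template):

package main

import (
	"os"
	"text/template"
)

type initCfg struct {
	NodeName string
	NodeIP   string
	BindPort int
}

// Trimmed excerpt of an InitConfiguration template.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the generated config above.
	if err := t.Execute(os.Stdout, initCfg{NodeName: "ha-254035", NodeIP: "192.168.49.2", BindPort: 8443}); err != nil {
		panic(err)
	}
}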
	I1017 19:32:13.128729  324968 kube-vip.go:115] generating kube-vip config ...
	I1017 19:32:13.128779  324968 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:32:13.140710  324968 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:32:13.140824  324968 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
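	[editor's note] kube-vip.go:163 above gives up IPVS-based control-plane load balancing because `lsmod | grep ip_vs` returned nothing, which is why the generated manifest carries only the ARP/VIP settings. That decision as a standalone sketch:

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable mirrors the `lsmod | grep ip_vs` probe from the log above.
func ipvsAvailable() bool {
	out, err := exec.Command("sh", "-c", "lsmod | grep ip_vs").Output()
	return err == nil && len(out) > 0
}

func main() {
	if ipvsAvailable() {
		fmt.Println("ip_vs modules present: control-plane load balancing can be enabled")
	} else {
		fmt.Println("ip_vs modules missing: giving up load balancing, ARP/VIP mode only")
	}
}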
	I1017 19:32:13.140891  324968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:32:13.148269  324968 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:32:13.148357  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1017 19:32:13.156108  324968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1017 19:32:13.168572  324968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:32:13.181432  324968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1017 19:32:13.193977  324968 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 19:32:13.207012  324968 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:32:13.210795  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:32:13.220459  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:13.334243  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:32:13.350459  324968 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.2
	I1017 19:32:13.350480  324968 certs.go:195] generating shared ca certs ...
	I1017 19:32:13.350496  324968 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:13.350630  324968 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:32:13.350673  324968 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:32:13.350681  324968 certs.go:257] generating profile certs ...
	I1017 19:32:13.350760  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:32:13.350837  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea
	I1017 19:32:13.350876  324968 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:32:13.350885  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:32:13.350898  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:32:13.350908  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:32:13.350918  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:32:13.350928  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:32:13.350941  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:32:13.350951  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:32:13.350962  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:32:13.351012  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:32:13.351041  324968 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:32:13.351048  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:32:13.351070  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:32:13.351095  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:32:13.351117  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:32:13.351161  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:32:13.351191  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:32:13.351207  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:13.351219  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:32:13.351856  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:32:13.375776  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:32:13.394623  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:32:13.413878  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:32:13.434296  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:32:13.456687  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:32:13.484245  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:32:13.505393  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:32:13.528512  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:32:13.550651  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:32:13.581215  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:32:13.601377  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:32:13.617352  324968 ssh_runner.go:195] Run: openssl version
	I1017 19:32:13.624146  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:32:13.633165  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:32:13.637212  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:32:13.637279  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:32:13.680086  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:32:13.689010  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:32:13.698044  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:13.701888  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:13.701957  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:13.744236  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:32:13.752213  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:32:13.760295  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:32:13.764256  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:32:13.764320  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:32:13.806422  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:32:13.814023  324968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:32:13.817664  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:32:13.858251  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:32:13.899329  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:32:13.940348  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:32:13.981700  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:32:14.022967  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
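	[editor's note] The `openssl x509 -checkend 86400` runs above verify that each control-plane certificate is still valid for at least another 24 hours before the existing cluster is reused. The same check expressed in Go (the path is one of the certs from the log; illustrative only):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid d from now,
// the same condition `openssl x509 -checkend` tests.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("valid for 24h:", ok, err)
}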
	I1017 19:32:14.071872  324968 kubeadm.go:400] StartCluster: {Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:32:14.072073  324968 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:32:14.072171  324968 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:32:14.159623  324968 cri.go:89] found id: "0652fd27f5bff0f3d194b39abbb92602f049204bb45577d9a403537b5949c8cc"
	I1017 19:32:14.159695  324968 cri.go:89] found id: ""
	I1017 19:32:14.159788  324968 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 19:32:14.178262  324968 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:32:14Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:32:14.178424  324968 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:32:14.193618  324968 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 19:32:14.193677  324968 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 19:32:14.193771  324968 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 19:32:14.214880  324968 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:32:14.215386  324968 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-254035" does not appear in /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:32:14.215555  324968 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-257739/kubeconfig needs updating (will repair): [kubeconfig missing "ha-254035" cluster setting kubeconfig missing "ha-254035" context setting]
	I1017 19:32:14.215920  324968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:14.216577  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 19:32:14.217294  324968 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1017 19:32:14.217346  324968 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1017 19:32:14.217362  324968 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1017 19:32:14.217367  324968 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1017 19:32:14.217427  324968 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1017 19:32:14.217452  324968 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
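	[editor's note] kapi.go:59 above builds a client configuration for the restarted API server from the profile's client cert/key and CA. A minimal sketch of the equivalent client-go setup, assuming k8s.io/client-go is available (host and paths copied from the log; not minikube's kapi package):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List nodes to confirm the control plane answers on the restarted endpoint.
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}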
	I1017 19:32:14.217940  324968 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 19:32:14.232358  324968 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1017 19:32:14.232432  324968 kubeadm.go:601] duration metric: took 38.716713ms to restartPrimaryControlPlane
	I1017 19:32:14.232455  324968 kubeadm.go:402] duration metric: took 160.594092ms to StartCluster
	I1017 19:32:14.232498  324968 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:14.232662  324968 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:32:14.233403  324968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:14.233677  324968 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:32:14.233733  324968 start.go:241] waiting for startup goroutines ...
	I1017 19:32:14.233763  324968 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:32:14.234454  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:14.239733  324968 out.go:179] * Enabled addons: 
	I1017 19:32:14.243909  324968 addons.go:514] duration metric: took 10.136788ms for enable addons: enabled=[]
	I1017 19:32:14.243996  324968 start.go:246] waiting for cluster config update ...
	I1017 19:32:14.244021  324968 start.go:255] writing updated cluster config ...
	I1017 19:32:14.247787  324968 out.go:203] 
	I1017 19:32:14.251318  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:14.251508  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:14.254862  324968 out.go:179] * Starting "ha-254035-m02" control-plane node in "ha-254035" cluster
	I1017 19:32:14.258139  324968 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:32:14.261425  324968 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:32:14.264451  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:32:14.264576  324968 cache.go:58] Caching tarball of preloaded images
	I1017 19:32:14.264510  324968 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:32:14.264972  324968 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:32:14.265018  324968 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:32:14.265234  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:14.286925  324968 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:32:14.286943  324968 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:32:14.286955  324968 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:32:14.286977  324968 start.go:360] acquireMachinesLock for ha-254035-m02: {Name:mkcf59557cfb2c18712510006a9b88f53e9d8916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:32:14.287029  324968 start.go:364] duration metric: took 36.003µs to acquireMachinesLock for "ha-254035-m02"
	I1017 19:32:14.287048  324968 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:32:14.287054  324968 fix.go:54] fixHost starting: m02
	I1017 19:32:14.287335  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:32:14.308380  324968 fix.go:112] recreateIfNeeded on ha-254035-m02: state=Stopped err=<nil>
	W1017 19:32:14.308406  324968 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:32:14.312007  324968 out.go:252] * Restarting existing docker container for "ha-254035-m02" ...
	I1017 19:32:14.312096  324968 cli_runner.go:164] Run: docker start ha-254035-m02
	I1017 19:32:14.710881  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:32:14.738971  324968 kic.go:430] container "ha-254035-m02" state is running.
	I1017 19:32:14.739337  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:32:14.764764  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:14.765007  324968 machine.go:93] provisionDockerMachine start ...
	I1017 19:32:14.765074  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:14.794957  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:14.795271  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1017 19:32:14.795287  324968 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:32:14.795888  324968 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:32:17.992435  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m02
	
	I1017 19:32:17.992457  324968 ubuntu.go:182] provisioning hostname "ha-254035-m02"
	I1017 19:32:17.992541  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:18.030394  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:18.030717  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1017 19:32:18.030730  324968 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035-m02 && echo "ha-254035-m02" | sudo tee /etc/hostname
	I1017 19:32:18.238178  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m02
	
	I1017 19:32:18.238358  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:18.269009  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:18.269312  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1017 19:32:18.269330  324968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:32:18.453189  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:32:18.453217  324968 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:32:18.453238  324968 ubuntu.go:190] setting up certificates
	I1017 19:32:18.453248  324968 provision.go:84] configureAuth start
	I1017 19:32:18.453312  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:32:18.494134  324968 provision.go:143] copyHostCerts
	I1017 19:32:18.494179  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:18.494213  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:32:18.494225  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:18.494315  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:32:18.494442  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:18.494469  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:32:18.494479  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:18.494510  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:32:18.494560  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:18.494584  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:32:18.494592  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:18.494620  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:32:18.494675  324968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035-m02 san=[127.0.0.1 192.168.49.3 ha-254035-m02 localhost minikube]
	I1017 19:32:19.339690  324968 provision.go:177] copyRemoteCerts
	I1017 19:32:19.339761  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:32:19.339805  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:19.360710  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:19.488967  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:32:19.489032  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:32:19.531594  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:32:19.531655  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:32:19.572626  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:32:19.572693  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:32:19.617410  324968 provision.go:87] duration metric: took 1.16414737s to configureAuth
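
configureAuth generates a server certificate with the SANs listed above and copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. A quick way to confirm what landed there, assuming shell access to the node (paths come from the log; the openssl invocations are ordinary, not minikube-specific):

    # Inspect the server certificate minikube copied to /etc/docker.
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -enddate

    # Confirm the private key matches the certificate (the two digests must be equal).
    sudo openssl x509 -in /etc/docker/server.pem -noout -pubkey | openssl sha256
    sudo openssl pkey -in /etc/docker/server-key.pem -pubout | openssl sha256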
	I1017 19:32:19.617479  324968 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:32:19.617739  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:19.617872  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:19.658286  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:19.658598  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1017 19:32:19.658613  324968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:32:20.717397  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:32:20.717469  324968 machine.go:96] duration metric: took 5.952443469s to provisionDockerMachine
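
The last provisioning step writes a CRI-O options file under /etc/sysconfig and restarts the runtime so the insecure-registry flag for the service CIDR takes effect. The same command, written as a plain script (content copied from the log):

    # Write the CRI-O options drop-in and restart the runtime.
    sudo mkdir -p /etc/sysconfig
    printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube >/dev/null
    sudo systemctl restart crio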
	I1017 19:32:20.717493  324968 start.go:293] postStartSetup for "ha-254035-m02" (driver="docker")
	I1017 19:32:20.717527  324968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:32:20.717636  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:32:20.717717  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:20.738048  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:20.853074  324968 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:32:20.857246  324968 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:32:20.857278  324968 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:32:20.857289  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:32:20.857346  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:32:20.857423  324968 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:32:20.857437  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:32:20.857537  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:32:20.866006  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:32:20.886225  324968 start.go:296] duration metric: took 168.70092ms for postStartSetup
	I1017 19:32:20.886334  324968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:32:20.886398  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:20.912756  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:21.034286  324968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:32:21.042383  324968 fix.go:56] duration metric: took 6.755322442s for fixHost
	I1017 19:32:21.042417  324968 start.go:83] releasing machines lock for "ha-254035-m02", held for 6.755380378s
	I1017 19:32:21.042509  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:32:21.067009  324968 out.go:179] * Found network options:
	I1017 19:32:21.069796  324968 out.go:179]   - NO_PROXY=192.168.49.2
	W1017 19:32:21.072617  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:32:21.072667  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 19:32:21.072737  324968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:32:21.072783  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:21.072798  324968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:32:21.072853  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:21.106980  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:21.116734  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:21.321123  324968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:32:21.398151  324968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:32:21.398260  324968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:32:21.429985  324968 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
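
Since kindnet provides the CNI here, minikube renames any bridge or podman configs under /etc/cni/net.d so they cannot shadow it; in this run nothing matched. An equivalent, shell-safe form of the find invocation from the log (quoting added for pasting into a shell, behaviour otherwise the same):

    # Park competing bridge/podman CNI configs by renaming them to *.mk_disabled.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;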
	I1017 19:32:21.430019  324968 start.go:495] detecting cgroup driver to use...
	I1017 19:32:21.430052  324968 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:32:21.430120  324968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:32:21.469545  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:32:21.499838  324968 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:32:21.499915  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:32:21.546298  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:32:21.574508  324968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:32:22.043397  324968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:32:22.346332  324968 docker.go:234] disabling docker service ...
	I1017 19:32:22.346414  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:32:22.366415  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:32:22.385363  324968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:32:22.610088  324968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:32:22.882540  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:32:22.898584  324968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:32:22.925839  324968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:32:22.925982  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.941214  324968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:32:22.941380  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.952790  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.964392  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.976274  324968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:32:22.986631  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.999122  324968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:23.017402  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:23.031048  324968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:32:23.041313  324968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:32:23.054658  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:23.287821  324968 ssh_runner.go:195] Run: sudo systemctl restart crio
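
The sed series above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs as cgroup manager, conmon_cgroup set to "pod", and the unprivileged-port sysctl, followed by a daemon-reload and CRI-O restart. A sketch for spot-checking that the edits took effect; the key names come from the log, the exact file layout on the node is an assumption:

    # Spot-check the CRI-O drop-in the sed commands above are expected to produce.
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' "$CONF"
    # Expected (assumed) values:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0" inside default_sysctls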
	I1017 19:32:23.539139  324968 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:32:23.539262  324968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:32:23.543731  324968 start.go:563] Will wait 60s for crictl version
	I1017 19:32:23.543842  324968 ssh_runner.go:195] Run: which crictl
	I1017 19:32:23.550732  324968 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:32:23.592317  324968 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:32:23.592405  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:32:23.642337  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:32:23.710060  324968 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:32:23.713120  324968 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 19:32:23.716299  324968 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:32:23.744818  324968 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:32:23.750008  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:32:23.771597  324968 mustload.go:65] Loading cluster: ha-254035
	I1017 19:32:23.771839  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:23.772139  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:32:23.805838  324968 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:32:23.806449  324968 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.3
	I1017 19:32:23.806468  324968 certs.go:195] generating shared ca certs ...
	I1017 19:32:23.806508  324968 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:23.809795  324968 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:32:23.809866  324968 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:32:23.809883  324968 certs.go:257] generating profile certs ...
	I1017 19:32:23.809976  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:32:23.810032  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.5a836dc6
	I1017 19:32:23.810076  324968 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:32:23.810089  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:32:23.810105  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:32:23.810121  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:32:23.810138  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:32:23.810155  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:32:23.810173  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:32:23.810185  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:32:23.810197  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:32:23.810249  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:32:23.810281  324968 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:32:23.810294  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:32:23.810326  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:32:23.810354  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:32:23.810380  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:32:23.810425  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:32:23.810467  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:23.810484  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:32:23.810495  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:32:23.810560  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:23.830858  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:23.928800  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 19:32:23.933176  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 19:32:23.948803  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 19:32:23.953564  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 19:32:23.963833  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 19:32:23.970797  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 19:32:23.980707  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 19:32:23.985094  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1017 19:32:23.994719  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 19:32:23.998983  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 19:32:24.010610  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 19:32:24.015549  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 19:32:24.026675  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:32:24.046169  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:32:24.065010  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:32:24.083555  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:32:24.101835  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:32:24.121645  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:32:24.140364  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:32:24.158250  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:32:24.175078  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:32:24.192107  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:32:24.210093  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:32:24.227779  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 19:32:24.240287  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 19:32:24.253704  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 19:32:24.268887  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1017 19:32:24.281554  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 19:32:24.294030  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 19:32:24.307056  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
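
The scp sequence above pushes the shared CA material, the profile's apiserver and proxy-client pairs, the service-account keys, the front-proxy and etcd CAs, and the kubeconfig into /var/lib/minikube. One quick consistency check on the node, using the paths shown in the log:

    # Verify the copied apiserver certificate chains to the copied cluster CA.
    C=/var/lib/minikube/certs
    sudo openssl verify -CAfile "$C/ca.crt" "$C/apiserver.crt"
    # Expected output: /var/lib/minikube/certs/apiserver.crt: OK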
	I1017 19:32:24.319713  324968 ssh_runner.go:195] Run: openssl version
	I1017 19:32:24.326454  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:32:24.334896  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:32:24.338984  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:32:24.339069  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:32:24.382244  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:32:24.389973  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:32:24.397963  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:24.402178  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:24.402260  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:24.445450  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:32:24.454057  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:32:24.462416  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:32:24.469188  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:32:24.469265  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:32:24.513771  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
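
Each CA dropped into /usr/share/ca-certificates also gets a subject-hash symlink in /etc/ssl/certs (3ec20f2e.0, b5213941.0 and 51391683.0 above) so OpenSSL can locate it during verification. The same pattern by hand, for any PEM certificate:

    # Create the subject-hash symlink OpenSSL uses for CA lookup.
    CERT=/usr/share/ca-certificates/minikubeCA.pem   # path taken from the log
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"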
	I1017 19:32:24.526391  324968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:32:24.532093  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:32:24.577438  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:32:24.619730  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:32:24.661938  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:32:24.706695  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:32:24.750711  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
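
The -checkend 86400 probes confirm that none of the existing control-plane certificates expire within the next 24 hours; a non-zero exit here would trigger regeneration. The same check as a loop, using the certificate names from the log:

    # Fail fast if any control-plane certificate expires within 24 hours.
    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
               etcd/server etcd/healthcheck-client etcd/peer; do
      sudo openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/${crt}.crt" \
        || echo "expiring soon: ${crt}.crt"
    done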
	I1017 19:32:24.792693  324968 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1017 19:32:24.792815  324968 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
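
The unit text above (Wants=crio.service plus the ExecStart override with the node-specific flags) is installed as a systemd drop-in a few lines below. A sketch of writing such a drop-in by hand and activating it, with the flags copied from this log; the exact file minikube writes may differ slightly:

    # Sketch: install the kubelet drop-in shown above and start the kubelet.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3

    [Install]
    EOF
    sudo systemctl daemon-reload && sudo systemctl start kubelet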
	I1017 19:32:24.792847  324968 kube-vip.go:115] generating kube-vip config ...
	I1017 19:32:24.792907  324968 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:32:24.805902  324968 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:32:24.805963  324968 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
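
Because `lsmod | grep ip_vs` found no IPVS modules, control-plane load balancing is skipped and the generated kube-vip manifest above runs in ARP mode for the VIP 192.168.49.254. The manifest is written to /etc/kubernetes/manifests/kube-vip.yaml just below, where the kubelet picks it up as a static pod. Two quick checks on a node (generic commands, not minikube-specific):

    # Check whether IPVS kernel modules are available (this determined the ARP-only mode).
    lsmod | grep ip_vs || echo "ip_vs not loaded: kube-vip stays in ARP-only mode"

    # Confirm the static pod manifest is in place for the kubelet to pick up.
    sudo ls -l /etc/kubernetes/manifests/kube-vip.yaml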
	I1017 19:32:24.806034  324968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:32:24.815558  324968 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:32:24.815637  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 19:32:24.823591  324968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:32:24.837169  324968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:32:24.849790  324968 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 19:32:24.870243  324968 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:32:24.879498  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:32:24.891396  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:25.079299  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:32:25.098478  324968 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:32:25.098820  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:25.104996  324968 out.go:179] * Verifying Kubernetes components...
	I1017 19:32:25.107746  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:25.272984  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:32:25.289585  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 19:32:25.289670  324968 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 19:32:25.289939  324968 node_ready.go:35] waiting up to 6m0s for node "ha-254035-m02" to be "Ready" ...
	W1017 19:32:45.698726  324968 node_ready.go:57] node "ha-254035-m02" has "Ready":"Unknown" status (will retry)
	W1017 19:32:47.846677  324968 node_ready.go:57] node "ha-254035-m02" has "Ready":"Unknown" status (will retry)
	W1017 19:32:50.300191  324968 node_ready.go:57] node "ha-254035-m02" has "Ready":"Unknown" status (will retry)
	W1017 19:32:52.794234  324968 node_ready.go:57] node "ha-254035-m02" has "Ready":"Unknown" status (will retry)
	I1017 19:32:55.298996  324968 node_ready.go:49] node "ha-254035-m02" is "Ready"
	I1017 19:32:55.299027  324968 node_ready.go:38] duration metric: took 30.009056285s for node "ha-254035-m02" to be "Ready" ...
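
The node_ready wait polls the Node object until its Ready condition flips from Unknown to True, which took about 30 s here. The equivalent one-off check with kubectl, assuming a working kubeconfig for this cluster:

    # Wait up to 6 minutes for the node to report Ready, mirroring the loop in the log.
    kubectl wait --for=condition=Ready node/ha-254035-m02 --timeout=6m

    # Or inspect the condition directly.
    kubectl get node ha-254035-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'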
	I1017 19:32:55.299042  324968 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:32:55.299101  324968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:32:55.311396  324968 api_server.go:72] duration metric: took 30.212852853s to wait for apiserver process to appear ...
	I1017 19:32:55.311421  324968 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:32:55.311440  324968 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 19:32:55.321736  324968 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 19:32:55.323225  324968 api_server.go:141] control plane version: v1.34.1
	I1017 19:32:55.323289  324968 api_server.go:131] duration metric: took 11.860591ms to wait for apiserver health ...
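
The health wait hits /healthz on the primary endpoint (note the stale VIP host was overridden to 192.168.49.2 a few lines above) and accepts any 200. Roughly the same probe with curl, using the client certificate paths shown in the kapi.go config line above:

    # Probe the apiserver health endpoint with the profile's admin client certificate.
    P=/home/jenkins/minikube-integration/21753-257739/.minikube
    curl -sS --cacert "$P/ca.crt" \
         --cert "$P/profiles/ha-254035/client.crt" \
         --key  "$P/profiles/ha-254035/client.key" \
         https://192.168.49.2:8443/healthz ; echo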
	I1017 19:32:55.323326  324968 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:32:55.332734  324968 system_pods.go:59] 26 kube-system pods found
	I1017 19:32:55.332788  324968 system_pods.go:61] "coredns-66bc5c9577-gfklr" [8bf2b43b-91c9-4531-a571-36060412860e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:32:55.332797  324968 system_pods.go:61] "coredns-66bc5c9577-wbgc8" [8e82e918-326c-4295-82ea-e35a31f64287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:32:55.332809  324968 system_pods.go:61] "etcd-ha-254035" [b4680f45-2e5c-49cd-8f12-76cd58e8a039] Running
	I1017 19:32:55.332819  324968 system_pods.go:61] "etcd-ha-254035-m02" [fd83b82f-417f-4a8d-b6f2-82d1a3ea4233] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:32:55.332827  324968 system_pods.go:61] "etcd-ha-254035-m03" [98b26c2c-cb88-4ade-80f5-45b9d2b82e8f] Running
	I1017 19:32:55.332832  324968 system_pods.go:61] "kindnet-2k9kj" [79d0c5f8-da5a-4d9e-b627-6746685bb4ec] Running
	I1017 19:32:55.332845  324968 system_pods.go:61] "kindnet-gzzsg" [9d09bb8e-ddb5-4533-9215-83fefb05a7eb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:32:55.332850  324968 system_pods.go:61] "kindnet-pwhwv" [45fe6d6c-f02a-45fd-807f-68edc98a1964] Running
	I1017 19:32:55.332863  324968 system_pods.go:61] "kindnet-vss98" [a6f8b1bf-7a57-4b08-ba72-5c79fe8d1cbe] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:32:55.332872  324968 system_pods.go:61] "kube-apiserver-ha-254035" [d7b4adda-06ab-4426-9829-87c607195341] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:32:55.332881  324968 system_pods.go:61] "kube-apiserver-ha-254035-m02" [9099db15-8600-470e-94c3-ca2a5eeea1ff] Running
	I1017 19:32:55.332886  324968 system_pods.go:61] "kube-apiserver-ha-254035-m03" [eb9a2a88-a691-4422-bb82-e0c198d601eb] Running
	I1017 19:32:55.332893  324968 system_pods.go:61] "kube-controller-manager-ha-254035" [9c5287e1-d9d8-4020-b6ec-b1059fff6764] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:32:55.332905  324968 system_pods.go:61] "kube-controller-manager-ha-254035-m02" [54702c01-b38e-4b5e-b7ea-e5af903630c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:32:55.332913  324968 system_pods.go:61] "kube-controller-manager-ha-254035-m03" [2bfb9df5-b257-45ec-be05-e930f56e3c7c] Running
	I1017 19:32:55.332921  324968 system_pods.go:61] "kube-proxy-548b2" [4b772887-90df-4871-9343-69349bdda859] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:32:55.332931  324968 system_pods.go:61] "kube-proxy-b4fr6" [a7ace6b8-0068-4c44-b8d9-8d66b10fa286] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:32:55.332936  324968 system_pods.go:61] "kube-proxy-fr5ts" [5c43f8a5-c3e0-4893-9ab0-c99f69a43434] Running
	I1017 19:32:55.332941  324968 system_pods.go:61] "kube-proxy-k56cv" [32bc352e-19aa-4bcf-8c5f-bb6ffa1b2f4d] Running
	I1017 19:32:55.332953  324968 system_pods.go:61] "kube-scheduler-ha-254035" [2f888dff-efbc-410b-9e14-93754573f2f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:32:55.332964  324968 system_pods.go:61] "kube-scheduler-ha-254035-m02" [dcaa8956-7720-467c-86d5-c0296adc07dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:32:55.332973  324968 system_pods.go:61] "kube-scheduler-ha-254035-m03" [00e19215-9094-448d-b734-227230b1c474] Running
	I1017 19:32:55.332981  324968 system_pods.go:61] "kube-vip-ha-254035" [777cc428-db79-4dee-abea-a428f4fabb67] Running
	I1017 19:32:55.332985  324968 system_pods.go:61] "kube-vip-ha-254035-m02" [3a49ae9c-fc6c-4ed7-9162-7ebc56124917] Running
	I1017 19:32:55.332989  324968 system_pods.go:61] "kube-vip-ha-254035-m03" [fa0f29b9-585d-4e28-9e32-7d493f0010dd] Running
	I1017 19:32:55.333000  324968 system_pods.go:61] "storage-provisioner" [4784cc20-6df7-4e32-bbfa-e0b3be4a1e83] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:32:55.333009  324968 system_pods.go:74] duration metric: took 9.659246ms to wait for pod list to return data ...
	I1017 19:32:55.333022  324968 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:32:55.344111  324968 default_sa.go:45] found service account: "default"
	I1017 19:32:55.344138  324968 default_sa.go:55] duration metric: took 11.10916ms for default service account to be created ...
	I1017 19:32:55.344149  324968 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:32:55.351885  324968 system_pods.go:86] 26 kube-system pods found
	I1017 19:32:55.351922  324968 system_pods.go:89] "coredns-66bc5c9577-gfklr" [8bf2b43b-91c9-4531-a571-36060412860e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:32:55.351933  324968 system_pods.go:89] "coredns-66bc5c9577-wbgc8" [8e82e918-326c-4295-82ea-e35a31f64287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:32:55.351940  324968 system_pods.go:89] "etcd-ha-254035" [b4680f45-2e5c-49cd-8f12-76cd58e8a039] Running
	I1017 19:32:55.351947  324968 system_pods.go:89] "etcd-ha-254035-m02" [fd83b82f-417f-4a8d-b6f2-82d1a3ea4233] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:32:55.351952  324968 system_pods.go:89] "etcd-ha-254035-m03" [98b26c2c-cb88-4ade-80f5-45b9d2b82e8f] Running
	I1017 19:32:55.351957  324968 system_pods.go:89] "kindnet-2k9kj" [79d0c5f8-da5a-4d9e-b627-6746685bb4ec] Running
	I1017 19:32:55.351966  324968 system_pods.go:89] "kindnet-gzzsg" [9d09bb8e-ddb5-4533-9215-83fefb05a7eb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:32:55.351971  324968 system_pods.go:89] "kindnet-pwhwv" [45fe6d6c-f02a-45fd-807f-68edc98a1964] Running
	I1017 19:32:55.351986  324968 system_pods.go:89] "kindnet-vss98" [a6f8b1bf-7a57-4b08-ba72-5c79fe8d1cbe] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:32:55.351997  324968 system_pods.go:89] "kube-apiserver-ha-254035" [d7b4adda-06ab-4426-9829-87c607195341] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:32:55.352003  324968 system_pods.go:89] "kube-apiserver-ha-254035-m02" [9099db15-8600-470e-94c3-ca2a5eeea1ff] Running
	I1017 19:32:55.352010  324968 system_pods.go:89] "kube-apiserver-ha-254035-m03" [eb9a2a88-a691-4422-bb82-e0c198d601eb] Running
	I1017 19:32:55.352019  324968 system_pods.go:89] "kube-controller-manager-ha-254035" [9c5287e1-d9d8-4020-b6ec-b1059fff6764] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:32:55.352031  324968 system_pods.go:89] "kube-controller-manager-ha-254035-m02" [54702c01-b38e-4b5e-b7ea-e5af903630c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:32:55.352036  324968 system_pods.go:89] "kube-controller-manager-ha-254035-m03" [2bfb9df5-b257-45ec-be05-e930f56e3c7c] Running
	I1017 19:32:55.352043  324968 system_pods.go:89] "kube-proxy-548b2" [4b772887-90df-4871-9343-69349bdda859] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:32:55.352051  324968 system_pods.go:89] "kube-proxy-b4fr6" [a7ace6b8-0068-4c44-b8d9-8d66b10fa286] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:32:55.352056  324968 system_pods.go:89] "kube-proxy-fr5ts" [5c43f8a5-c3e0-4893-9ab0-c99f69a43434] Running
	I1017 19:32:55.352062  324968 system_pods.go:89] "kube-proxy-k56cv" [32bc352e-19aa-4bcf-8c5f-bb6ffa1b2f4d] Running
	I1017 19:32:55.352068  324968 system_pods.go:89] "kube-scheduler-ha-254035" [2f888dff-efbc-410b-9e14-93754573f2f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:32:55.352086  324968 system_pods.go:89] "kube-scheduler-ha-254035-m02" [dcaa8956-7720-467c-86d5-c0296adc07dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:32:55.352091  324968 system_pods.go:89] "kube-scheduler-ha-254035-m03" [00e19215-9094-448d-b734-227230b1c474] Running
	I1017 19:32:55.352096  324968 system_pods.go:89] "kube-vip-ha-254035" [777cc428-db79-4dee-abea-a428f4fabb67] Running
	I1017 19:32:55.352100  324968 system_pods.go:89] "kube-vip-ha-254035-m02" [3a49ae9c-fc6c-4ed7-9162-7ebc56124917] Running
	I1017 19:32:55.352108  324968 system_pods.go:89] "kube-vip-ha-254035-m03" [fa0f29b9-585d-4e28-9e32-7d493f0010dd] Running
	I1017 19:32:55.352116  324968 system_pods.go:89] "storage-provisioner" [4784cc20-6df7-4e32-bbfa-e0b3be4a1e83] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:32:55.352123  324968 system_pods.go:126] duration metric: took 7.969634ms to wait for k8s-apps to be running ...
	I1017 19:32:55.352135  324968 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:32:55.352192  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:32:55.367145  324968 system_svc.go:56] duration metric: took 14.999806ms WaitForService to wait for kubelet
	I1017 19:32:55.367171  324968 kubeadm.go:586] duration metric: took 30.268632021s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:32:55.367192  324968 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:32:55.370727  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:32:55.370762  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:32:55.370773  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:32:55.370778  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:32:55.370782  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:32:55.370786  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:32:55.370790  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:32:55.370793  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:32:55.370798  324968 node_conditions.go:105] duration metric: took 3.600536ms to run NodePressure ...
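
The NodePressure check reads each node's capacity (all four nodes report 203034800Ki of ephemeral storage and 2 CPUs here) and verifies there is no memory or disk pressure. The same data retrieved with kubectl, as a rough equivalent:

    # Show per-node CPU and ephemeral-storage capacity, as the NodePressure check reads it.
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\tcpu="}{.status.capacity.cpu}{"\tephemeral-storage="}{.status.capacity.ephemeral-storage}{"\n"}{end}'

    # And the pressure conditions themselves.
    kubectl describe nodes | grep -E 'MemoryPressure|DiskPressure'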
	I1017 19:32:55.370811  324968 start.go:241] waiting for startup goroutines ...
	I1017 19:32:55.370845  324968 start.go:255] writing updated cluster config ...
	I1017 19:32:55.374424  324968 out.go:203] 
	I1017 19:32:55.377636  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:55.377758  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:55.381262  324968 out.go:179] * Starting "ha-254035-m03" control-plane node in "ha-254035" cluster
	I1017 19:32:55.385137  324968 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:32:55.388169  324968 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:32:55.391014  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:32:55.391065  324968 cache.go:58] Caching tarball of preloaded images
	I1017 19:32:55.391130  324968 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:32:55.391213  324968 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:32:55.391250  324968 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:32:55.391408  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:55.410277  324968 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:32:55.410300  324968 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:32:55.410323  324968 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:32:55.410347  324968 start.go:360] acquireMachinesLock for ha-254035-m03: {Name:mked9f1e3aab9db3df3b59f9799fd7eb1b9dc756 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:32:55.410421  324968 start.go:364] duration metric: took 54.473µs to acquireMachinesLock for "ha-254035-m03"
	I1017 19:32:55.410445  324968 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:32:55.410454  324968 fix.go:54] fixHost starting: m03
	I1017 19:32:55.410732  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m03 --format={{.State.Status}}
	I1017 19:32:55.427703  324968 fix.go:112] recreateIfNeeded on ha-254035-m03: state=Stopped err=<nil>
	W1017 19:32:55.427730  324968 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:32:55.431363  324968 out.go:252] * Restarting existing docker container for "ha-254035-m03" ...
	I1017 19:32:55.431457  324968 cli_runner.go:164] Run: docker start ha-254035-m03
	I1017 19:32:55.755807  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m03 --format={{.State.Status}}
	I1017 19:32:55.777127  324968 kic.go:430] container "ha-254035-m03" state is running.
	I1017 19:32:55.777489  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m03
	I1017 19:32:55.800244  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:55.800494  324968 machine.go:93] provisionDockerMachine start ...
	I1017 19:32:55.800582  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:32:55.829783  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:55.830097  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 19:32:55.830107  324968 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:32:55.830700  324968 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:32:59.026446  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m03
	
	I1017 19:32:59.026469  324968 ubuntu.go:182] provisioning hostname "ha-254035-m03"
	I1017 19:32:59.026531  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:32:59.057027  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:59.057341  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 19:32:59.057359  324968 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035-m03 && echo "ha-254035-m03" | sudo tee /etc/hostname
	I1017 19:32:59.282090  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m03
	
	I1017 19:32:59.282168  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:32:59.325073  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:59.325398  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 19:32:59.325420  324968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:32:59.509111  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:32:59.509181  324968 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:32:59.509265  324968 ubuntu.go:190] setting up certificates
	I1017 19:32:59.509297  324968 provision.go:84] configureAuth start
	I1017 19:32:59.509400  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m03
	I1017 19:32:59.548783  324968 provision.go:143] copyHostCerts
	I1017 19:32:59.548834  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:59.548871  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:32:59.548878  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:59.548957  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:32:59.549040  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:59.549072  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:32:59.549078  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:59.549106  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:32:59.549151  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:59.549168  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:32:59.549172  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:59.549195  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:32:59.549242  324968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035-m03 san=[127.0.0.1 192.168.49.4 ha-254035-m03 localhost minikube]
	I1017 19:33:00.043691  324968 provision.go:177] copyRemoteCerts
	I1017 19:33:00.043871  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:33:00.043944  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:00.064471  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:33:00.223369  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:33:00.223446  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:33:00.260611  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:33:00.260683  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:33:00.317143  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:33:00.317306  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:33:00.385743  324968 provision.go:87] duration metric: took 876.417393ms to configureAuth
	I1017 19:33:00.385819  324968 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:33:00.386115  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:00.386276  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:00.432179  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:00.432495  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 19:33:00.432512  324968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:33:00.901503  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:33:00.901591  324968 machine.go:96] duration metric: took 5.101084009s to provisionDockerMachine
	I1017 19:33:00.901618  324968 start.go:293] postStartSetup for "ha-254035-m03" (driver="docker")
	I1017 19:33:00.901662  324968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:33:00.901753  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:33:00.901835  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:00.927269  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:33:01.051646  324968 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:33:01.055666  324968 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:33:01.055692  324968 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:33:01.055704  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:33:01.055763  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:33:01.055854  324968 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:33:01.055866  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:33:01.055965  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:33:01.066853  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:33:01.101261  324968 start.go:296] duration metric: took 199.597831ms for postStartSetup
	I1017 19:33:01.101355  324968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:33:01.101408  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:01.130630  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:33:01.323449  324968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:33:01.379781  324968 fix.go:56] duration metric: took 5.969318931s for fixHost
	I1017 19:33:01.379809  324968 start.go:83] releasing machines lock for "ha-254035-m03", held for 5.969375603s
	I1017 19:33:01.379881  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m03
	I1017 19:33:01.416934  324968 out.go:179] * Found network options:
	I1017 19:33:01.419424  324968 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1017 19:33:01.422873  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:01.422914  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:01.422951  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:01.422967  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 19:33:01.423035  324968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:33:01.423092  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:01.423496  324968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:33:01.423560  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:01.460787  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:33:01.468755  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:33:01.901807  324968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:33:02.054376  324968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:33:02.054456  324968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:33:02.063698  324968 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:33:02.063723  324968 start.go:495] detecting cgroup driver to use...
	I1017 19:33:02.063757  324968 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:33:02.063816  324968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:33:02.083121  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:33:02.099886  324968 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:33:02.099962  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:33:02.129631  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:33:02.146247  324968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:33:02.487383  324968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:33:02.778663  324968 docker.go:234] disabling docker service ...
	I1017 19:33:02.778765  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:33:02.797150  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:33:02.816103  324968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:33:03.072265  324968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:33:03.311051  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:33:03.337034  324968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:33:03.367080  324968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:33:03.367228  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.379211  324968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:33:03.379292  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.403390  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.417512  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.434353  324968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:33:03.450504  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.465403  324968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.497155  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.516048  324968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:33:03.527113  324968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:33:03.546234  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:03.821017  324968 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:33:05.091469  324968 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.270414549s)
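The sequence above shows the driver pointing CRI-O at the registry.k8s.io/pause:3.10.1 pause image, switching it to the cgroupfs cgroup driver, allowing unprivileged low ports, enabling IPv4 forwarding, and then restarting the service. For readers reproducing that step by hand, the same reconfiguration condenses to roughly the following script (file path and values taken from the log above; a sketch for manual reproduction, not minikube's own code):

	#!/bin/bash
	# Reconfigure CRI-O the way the run above does, then restart it.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	# Let pods bind ports below 1024 without extra capabilities.
	sudo grep -q '^ *default_sysctls' "$CONF" || \
	  sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	# Kernel prerequisite, then restart the runtime.
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio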
	I1017 19:33:05.091496  324968 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:33:05.091552  324968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:33:05.096822  324968 start.go:563] Will wait 60s for crictl version
	I1017 19:33:05.096899  324968 ssh_runner.go:195] Run: which crictl
	I1017 19:33:05.102601  324968 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:33:05.133868  324968 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:33:05.133956  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:33:05.169578  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:33:05.203999  324968 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:33:05.206796  324968 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 19:33:05.209777  324968 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1017 19:33:05.212751  324968 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:33:05.237841  324968 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:33:05.242830  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:33:05.255230  324968 mustload.go:65] Loading cluster: ha-254035
	I1017 19:33:05.255472  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:05.255718  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:33:05.273658  324968 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:33:05.273934  324968 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.4
	I1017 19:33:05.273942  324968 certs.go:195] generating shared ca certs ...
	I1017 19:33:05.273956  324968 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:33:05.274063  324968 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:33:05.274105  324968 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:33:05.274111  324968 certs.go:257] generating profile certs ...
	I1017 19:33:05.274183  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:33:05.274262  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.db0a5916
	I1017 19:33:05.274301  324968 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:33:05.274310  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:33:05.274333  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:33:05.274345  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:33:05.274357  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:33:05.274367  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:33:05.274379  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:33:05.274397  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:33:05.274409  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:33:05.274457  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:33:05.274485  324968 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:33:05.274493  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:33:05.274518  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:33:05.274539  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:33:05.274559  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:33:05.274597  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:33:05.274622  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:33:05.274637  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:33:05.274648  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:05.274703  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:33:05.302509  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:33:05.404899  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 19:33:05.408751  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 19:33:05.417079  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 19:33:05.420443  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 19:33:05.429786  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 19:33:05.433515  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 19:33:05.442432  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 19:33:05.446029  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1017 19:33:05.456258  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 19:33:05.460045  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 19:33:05.468819  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 19:33:05.473279  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 19:33:05.482460  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:33:05.502746  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:33:05.521060  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:33:05.540206  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:33:05.559261  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:33:05.579914  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:33:05.607376  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:33:05.624208  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:33:05.643462  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:33:05.663238  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:33:05.685107  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:33:05.703927  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 19:33:05.716945  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 19:33:05.730309  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 19:33:05.744332  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1017 19:33:05.760823  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 19:33:05.781849  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 19:33:05.797383  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1017 19:33:05.815449  324968 ssh_runner.go:195] Run: openssl version
	I1017 19:33:05.822374  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:33:05.830919  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:05.835675  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:05.835801  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:05.879325  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:33:05.888083  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:33:05.896261  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:33:05.900178  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:33:05.900239  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:33:05.943707  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:33:05.952618  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:33:05.961373  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:33:05.964981  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:33:05.965094  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:33:06.008396  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:33:06.017978  324968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:33:06.022220  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:33:06.064442  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:33:06.106411  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:33:06.147611  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:33:06.191689  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:33:06.235810  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
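The certificate handling above relies on two standard OpenSSL conventions: CA certificates are linked into /etc/ssl/certs under their subject-hash name so the system trust store can find them, and "-checkend 86400" fails when a certificate expires within the next 24 hours. A minimal manual equivalent, using the same paths as this run (a reproduction sketch, not minikube code; reading /var/lib/minikube/certs may require elevated privileges):

	# Trust a CA by its subject hash (the <hash>.0 symlink OpenSSL looks up).
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"

	# Non-zero exit if a control-plane cert expires within 86400 seconds (24h).
	for crt in apiserver-kubelet-client.crt etcd/server.crt front-proxy-client.crt; do
	  openssl x509 -noout -in "/var/lib/minikube/certs/${crt}" -checkend 86400 \
	    || echo "${crt} expires within 24h"
	done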
	I1017 19:33:06.278610  324968 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1017 19:33:06.278711  324968 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:33:06.278740  324968 kube-vip.go:115] generating kube-vip config ...
	I1017 19:33:06.278801  324968 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:33:06.292033  324968 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:33:06.292094  324968 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
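Just before emitting this manifest the run checked "lsmod | grep ip_vs" and, finding no IPVS modules loaded, generated the kube-vip config without control-plane load-balancing. If IPVS were wanted, a quick pre-check on the node would look like the snippet below (hedged: module availability depends on the host kernel, and the modprobe call is a suggestion, not something this run performed):

	# Is IPVS available for kube-vip's load-balancing mode?
	if ! lsmod | grep -q ip_vs; then
	  # Try to load it; this only works if the kernel ships the module.
	  sudo modprobe ip_vs || echo "ip_vs module not available on this kernel"
	fi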
	I1017 19:33:06.292151  324968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:33:06.300562  324968 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:33:06.300652  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 19:33:06.314364  324968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:33:06.329602  324968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:33:06.360017  324968 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 19:33:06.379948  324968 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:33:06.383943  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:33:06.395455  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:06.558780  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:33:06.573849  324968 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:33:06.574138  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:06.579819  324968 out.go:179] * Verifying Kubernetes components...
	I1017 19:33:06.582763  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:06.726699  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:33:06.743509  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 19:33:06.743622  324968 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 19:33:06.743944  324968 node_ready.go:35] waiting up to 6m0s for node "ha-254035-m03" to be "Ready" ...
	W1017 19:33:08.748353  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:11.248113  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:13.747938  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:16.248008  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:18.248671  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:20.249311  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:22.747279  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:24.747653  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:26.749385  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	I1017 19:33:27.747523  324968 node_ready.go:49] node "ha-254035-m03" is "Ready"
	I1017 19:33:27.747558  324968 node_ready.go:38] duration metric: took 21.003579566s for node "ha-254035-m03" to be "Ready" ...
	I1017 19:33:27.747571  324968 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:33:27.747631  324968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:33:27.766700  324968 api_server.go:72] duration metric: took 21.192473888s to wait for apiserver process to appear ...
	I1017 19:33:27.766729  324968 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:33:27.766753  324968 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 19:33:27.775571  324968 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 19:33:27.776498  324968 api_server.go:141] control plane version: v1.34.1
	I1017 19:33:27.776585  324968 api_server.go:131] duration metric: took 9.846294ms to wait for apiserver health ...
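From this point the driver is only polling cluster state through the API. The same readiness checks can be reproduced by hand against this cluster (assuming a kubeconfig pointing at it; /healthz is typically served without credentials, but hardened clusters may require them):

	# Wait for the rejoined control-plane node to report Ready (this run waited ~21s).
	kubectl wait --for=condition=Ready node/ha-254035-m03 --timeout=6m

	# Probe apiserver health directly, as the log does above.
	curl -sk https://192.168.49.2:8443/healthz   # expect: ok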
	I1017 19:33:27.776595  324968 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:33:27.783374  324968 system_pods.go:59] 26 kube-system pods found
	I1017 19:33:27.783414  324968 system_pods.go:61] "coredns-66bc5c9577-gfklr" [8bf2b43b-91c9-4531-a571-36060412860e] Running
	I1017 19:33:27.783426  324968 system_pods.go:61] "coredns-66bc5c9577-wbgc8" [8e82e918-326c-4295-82ea-e35a31f64287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:33:27.783431  324968 system_pods.go:61] "etcd-ha-254035" [b4680f45-2e5c-49cd-8f12-76cd58e8a039] Running
	I1017 19:33:27.783438  324968 system_pods.go:61] "etcd-ha-254035-m02" [fd83b82f-417f-4a8d-b6f2-82d1a3ea4233] Running
	I1017 19:33:27.783442  324968 system_pods.go:61] "etcd-ha-254035-m03" [98b26c2c-cb88-4ade-80f5-45b9d2b82e8f] Running
	I1017 19:33:27.783446  324968 system_pods.go:61] "kindnet-2k9kj" [79d0c5f8-da5a-4d9e-b627-6746685bb4ec] Running
	I1017 19:33:27.783450  324968 system_pods.go:61] "kindnet-gzzsg" [9d09bb8e-ddb5-4533-9215-83fefb05a7eb] Running
	I1017 19:33:27.783455  324968 system_pods.go:61] "kindnet-pwhwv" [45fe6d6c-f02a-45fd-807f-68edc98a1964] Running
	I1017 19:33:27.783464  324968 system_pods.go:61] "kindnet-vss98" [a6f8b1bf-7a57-4b08-ba72-5c79fe8d1cbe] Running
	I1017 19:33:27.783469  324968 system_pods.go:61] "kube-apiserver-ha-254035" [d7b4adda-06ab-4426-9829-87c607195341] Running
	I1017 19:33:27.783480  324968 system_pods.go:61] "kube-apiserver-ha-254035-m02" [9099db15-8600-470e-94c3-ca2a5eeea1ff] Running
	I1017 19:33:27.783484  324968 system_pods.go:61] "kube-apiserver-ha-254035-m03" [eb9a2a88-a691-4422-bb82-e0c198d601eb] Running
	I1017 19:33:27.783489  324968 system_pods.go:61] "kube-controller-manager-ha-254035" [9c5287e1-d9d8-4020-b6ec-b1059fff6764] Running
	I1017 19:33:27.783500  324968 system_pods.go:61] "kube-controller-manager-ha-254035-m02" [54702c01-b38e-4b5e-b7ea-e5af903630c0] Running
	I1017 19:33:27.783505  324968 system_pods.go:61] "kube-controller-manager-ha-254035-m03" [2bfb9df5-b257-45ec-be05-e930f56e3c7c] Running
	I1017 19:33:27.783509  324968 system_pods.go:61] "kube-proxy-548b2" [4b772887-90df-4871-9343-69349bdda859] Running
	I1017 19:33:27.783519  324968 system_pods.go:61] "kube-proxy-b4fr6" [a7ace6b8-0068-4c44-b8d9-8d66b10fa286] Running
	I1017 19:33:27.783524  324968 system_pods.go:61] "kube-proxy-fr5ts" [5c43f8a5-c3e0-4893-9ab0-c99f69a43434] Running
	I1017 19:33:27.783528  324968 system_pods.go:61] "kube-proxy-k56cv" [32bc352e-19aa-4bcf-8c5f-bb6ffa1b2f4d] Running
	I1017 19:33:27.783532  324968 system_pods.go:61] "kube-scheduler-ha-254035" [2f888dff-efbc-410b-9e14-93754573f2f6] Running
	I1017 19:33:27.783536  324968 system_pods.go:61] "kube-scheduler-ha-254035-m02" [dcaa8956-7720-467c-86d5-c0296adc07dc] Running
	I1017 19:33:27.783541  324968 system_pods.go:61] "kube-scheduler-ha-254035-m03" [00e19215-9094-448d-b734-227230b1c474] Running
	I1017 19:33:27.783545  324968 system_pods.go:61] "kube-vip-ha-254035" [777cc428-db79-4dee-abea-a428f4fabb67] Running
	I1017 19:33:27.783552  324968 system_pods.go:61] "kube-vip-ha-254035-m02" [3a49ae9c-fc6c-4ed7-9162-7ebc56124917] Running
	I1017 19:33:27.783556  324968 system_pods.go:61] "kube-vip-ha-254035-m03" [fa0f29b9-585d-4e28-9e32-7d493f0010dd] Running
	I1017 19:33:27.783564  324968 system_pods.go:61] "storage-provisioner" [4784cc20-6df7-4e32-bbfa-e0b3be4a1e83] Running
	I1017 19:33:27.783569  324968 system_pods.go:74] duration metric: took 6.965509ms to wait for pod list to return data ...
	I1017 19:33:27.783582  324968 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:33:27.788939  324968 default_sa.go:45] found service account: "default"
	I1017 19:33:27.788978  324968 default_sa.go:55] duration metric: took 5.380156ms for default service account to be created ...
	I1017 19:33:27.788989  324968 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:33:27.884397  324968 system_pods.go:86] 26 kube-system pods found
	I1017 19:33:27.884440  324968 system_pods.go:89] "coredns-66bc5c9577-gfklr" [8bf2b43b-91c9-4531-a571-36060412860e] Running
	I1017 19:33:27.884450  324968 system_pods.go:89] "coredns-66bc5c9577-wbgc8" [8e82e918-326c-4295-82ea-e35a31f64287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:33:27.884456  324968 system_pods.go:89] "etcd-ha-254035" [b4680f45-2e5c-49cd-8f12-76cd58e8a039] Running
	I1017 19:33:27.884462  324968 system_pods.go:89] "etcd-ha-254035-m02" [fd83b82f-417f-4a8d-b6f2-82d1a3ea4233] Running
	I1017 19:33:27.884466  324968 system_pods.go:89] "etcd-ha-254035-m03" [98b26c2c-cb88-4ade-80f5-45b9d2b82e8f] Running
	I1017 19:33:27.884475  324968 system_pods.go:89] "kindnet-2k9kj" [79d0c5f8-da5a-4d9e-b627-6746685bb4ec] Running
	I1017 19:33:27.884478  324968 system_pods.go:89] "kindnet-gzzsg" [9d09bb8e-ddb5-4533-9215-83fefb05a7eb] Running
	I1017 19:33:27.884482  324968 system_pods.go:89] "kindnet-pwhwv" [45fe6d6c-f02a-45fd-807f-68edc98a1964] Running
	I1017 19:33:27.884494  324968 system_pods.go:89] "kindnet-vss98" [a6f8b1bf-7a57-4b08-ba72-5c79fe8d1cbe] Running
	I1017 19:33:27.884505  324968 system_pods.go:89] "kube-apiserver-ha-254035" [d7b4adda-06ab-4426-9829-87c607195341] Running
	I1017 19:33:27.884525  324968 system_pods.go:89] "kube-apiserver-ha-254035-m02" [9099db15-8600-470e-94c3-ca2a5eeea1ff] Running
	I1017 19:33:27.884531  324968 system_pods.go:89] "kube-apiserver-ha-254035-m03" [eb9a2a88-a691-4422-bb82-e0c198d601eb] Running
	I1017 19:33:27.884535  324968 system_pods.go:89] "kube-controller-manager-ha-254035" [9c5287e1-d9d8-4020-b6ec-b1059fff6764] Running
	I1017 19:33:27.884540  324968 system_pods.go:89] "kube-controller-manager-ha-254035-m02" [54702c01-b38e-4b5e-b7ea-e5af903630c0] Running
	I1017 19:33:27.884545  324968 system_pods.go:89] "kube-controller-manager-ha-254035-m03" [2bfb9df5-b257-45ec-be05-e930f56e3c7c] Running
	I1017 19:33:27.884559  324968 system_pods.go:89] "kube-proxy-548b2" [4b772887-90df-4871-9343-69349bdda859] Running
	I1017 19:33:27.884563  324968 system_pods.go:89] "kube-proxy-b4fr6" [a7ace6b8-0068-4c44-b8d9-8d66b10fa286] Running
	I1017 19:33:27.884567  324968 system_pods.go:89] "kube-proxy-fr5ts" [5c43f8a5-c3e0-4893-9ab0-c99f69a43434] Running
	I1017 19:33:27.884571  324968 system_pods.go:89] "kube-proxy-k56cv" [32bc352e-19aa-4bcf-8c5f-bb6ffa1b2f4d] Running
	I1017 19:33:27.884602  324968 system_pods.go:89] "kube-scheduler-ha-254035" [2f888dff-efbc-410b-9e14-93754573f2f6] Running
	I1017 19:33:27.884606  324968 system_pods.go:89] "kube-scheduler-ha-254035-m02" [dcaa8956-7720-467c-86d5-c0296adc07dc] Running
	I1017 19:33:27.884610  324968 system_pods.go:89] "kube-scheduler-ha-254035-m03" [00e19215-9094-448d-b734-227230b1c474] Running
	I1017 19:33:27.884614  324968 system_pods.go:89] "kube-vip-ha-254035" [777cc428-db79-4dee-abea-a428f4fabb67] Running
	I1017 19:33:27.884618  324968 system_pods.go:89] "kube-vip-ha-254035-m02" [3a49ae9c-fc6c-4ed7-9162-7ebc56124917] Running
	I1017 19:33:27.884622  324968 system_pods.go:89] "kube-vip-ha-254035-m03" [fa0f29b9-585d-4e28-9e32-7d493f0010dd] Running
	I1017 19:33:27.884630  324968 system_pods.go:89] "storage-provisioner" [4784cc20-6df7-4e32-bbfa-e0b3be4a1e83] Running
	I1017 19:33:27.884636  324968 system_pods.go:126] duration metric: took 95.641254ms to wait for k8s-apps to be running ...
	I1017 19:33:27.884659  324968 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:33:27.884730  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:33:27.903571  324968 system_svc.go:56] duration metric: took 18.903653ms WaitForService to wait for kubelet
	I1017 19:33:27.903609  324968 kubeadm.go:586] duration metric: took 21.32938831s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:33:27.903634  324968 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:33:27.907627  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:27.907667  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:27.907680  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:27.907685  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:27.907689  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:27.907694  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:27.907697  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:27.907701  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:27.907706  324968 node_conditions.go:105] duration metric: took 4.066189ms to run NodePressure ...
	I1017 19:33:27.907719  324968 start.go:241] waiting for startup goroutines ...
	I1017 19:33:27.907751  324968 start.go:255] writing updated cluster config ...
	I1017 19:33:27.911402  324968 out.go:203] 
	I1017 19:33:27.915521  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:27.915649  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:33:27.918913  324968 out.go:179] * Starting "ha-254035-m04" worker node in "ha-254035" cluster
	I1017 19:33:27.921713  324968 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:33:27.924620  324968 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:33:27.927532  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:33:27.927564  324968 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:33:27.927567  324968 cache.go:58] Caching tarball of preloaded images
	I1017 19:33:27.927721  324968 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:33:27.927731  324968 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:33:27.927887  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:33:27.960833  324968 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:33:27.960852  324968 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:33:27.960865  324968 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:33:27.960889  324968 start.go:360] acquireMachinesLock for ha-254035-m04: {Name:mk584e2cd96462cdaa6d1f2088a137ff40c48733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:33:27.960940  324968 start.go:364] duration metric: took 36.438µs to acquireMachinesLock for "ha-254035-m04"
	I1017 19:33:27.960959  324968 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:33:27.960964  324968 fix.go:54] fixHost starting: m04
	I1017 19:33:27.961255  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m04 --format={{.State.Status}}
	I1017 19:33:27.995390  324968 fix.go:112] recreateIfNeeded on ha-254035-m04: state=Stopped err=<nil>
	W1017 19:33:27.995487  324968 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:33:27.999207  324968 out.go:252] * Restarting existing docker container for "ha-254035-m04" ...
	I1017 19:33:27.999295  324968 cli_runner.go:164] Run: docker start ha-254035-m04
	I1017 19:33:28.394503  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m04 --format={{.State.Status}}
	I1017 19:33:28.421995  324968 kic.go:430] container "ha-254035-m04" state is running.
	I1017 19:33:28.422449  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m04
	I1017 19:33:28.441865  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:33:28.442116  324968 machine.go:93] provisionDockerMachine start ...
	I1017 19:33:28.442199  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:28.474872  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:28.475264  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 19:33:28.475277  324968 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:33:28.476011  324968 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:33:31.633234  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m04
	
	I1017 19:33:31.633323  324968 ubuntu.go:182] provisioning hostname "ha-254035-m04"
	I1017 19:33:31.633415  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:31.653177  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:31.653483  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 19:33:31.653500  324968 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035-m04 && echo "ha-254035-m04" | sudo tee /etc/hostname
	I1017 19:33:31.837574  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m04
	
	I1017 19:33:31.837648  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:31.855639  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:31.855942  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 19:33:31.855960  324968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:33:32.021671  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:33:32.021700  324968 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:33:32.021717  324968 ubuntu.go:190] setting up certificates
	I1017 19:33:32.021728  324968 provision.go:84] configureAuth start
	I1017 19:33:32.021791  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m04
	I1017 19:33:32.058708  324968 provision.go:143] copyHostCerts
	I1017 19:33:32.058751  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:33:32.058799  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:33:32.058807  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:33:32.058887  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:33:32.058963  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:33:32.058981  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:33:32.058986  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:33:32.059011  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:33:32.059054  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:33:32.059070  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:33:32.059074  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:33:32.059096  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:33:32.059142  324968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035-m04 san=[127.0.0.1 192.168.49.5 ha-254035-m04 localhost minikube]
	I1017 19:33:32.315144  324968 provision.go:177] copyRemoteCerts
	I1017 19:33:32.315269  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:33:32.315346  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:32.336727  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:32.451884  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:33:32.451953  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:33:32.477259  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:33:32.477335  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:33:32.496861  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:33:32.496932  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:33:32.517190  324968 provision.go:87] duration metric: took 495.446144ms to configureAuth
	I1017 19:33:32.517214  324968 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:33:32.517497  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:32.517606  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:32.538066  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:32.538377  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 19:33:32.538397  324968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:33:32.868308  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:33:32.868331  324968 machine.go:96] duration metric: took 4.426196148s to provisionDockerMachine
	I1017 19:33:32.868343  324968 start.go:293] postStartSetup for "ha-254035-m04" (driver="docker")
	I1017 19:33:32.868353  324968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:33:32.868430  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:33:32.868488  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:32.888400  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:33.003003  324968 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:33:33.008119  324968 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:33:33.008155  324968 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:33:33.008169  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:33:33.008242  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:33:33.008327  324968 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:33:33.008339  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:33:33.008446  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:33:33.018512  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:33:33.048826  324968 start.go:296] duration metric: took 180.468283ms for postStartSetup
	I1017 19:33:33.048927  324968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:33:33.048979  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:33.068864  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:33.183386  324968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:33:33.188620  324968 fix.go:56] duration metric: took 5.227645919s for fixHost
	I1017 19:33:33.188649  324968 start.go:83] releasing machines lock for "ha-254035-m04", held for 5.227700884s
	I1017 19:33:33.188718  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m04
	I1017 19:33:33.212152  324968 out.go:179] * Found network options:
	I1017 19:33:33.215093  324968 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1017 19:33:33.217835  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217871  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217882  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217906  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217916  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217926  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 19:33:33.217995  324968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:33:33.218040  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:33.218316  324968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:33:33.218377  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:33.247548  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:33.256825  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:33.415645  324968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:33:33.492514  324968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:33:33.492637  324968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:33:33.500683  324968 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:33:33.500716  324968 start.go:495] detecting cgroup driver to use...
	I1017 19:33:33.500752  324968 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:33:33.500801  324968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:33:33.517445  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:33:33.537937  324968 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:33:33.538053  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:33:33.556447  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:33:33.576435  324968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:33:33.721164  324968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:33:33.856018  324968 docker.go:234] disabling docker service ...
	I1017 19:33:33.856163  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:33:33.874251  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:33:33.889153  324968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:33:34.059244  324968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:33:34.205588  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:33:34.223596  324968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:33:34.248335  324968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:33:34.248449  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.259664  324968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:33:34.259750  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.274225  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.284260  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.293374  324968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:33:34.301939  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.313190  324968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.322270  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.335994  324968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:33:34.345500  324968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:33:34.355597  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:34.485902  324968 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:33:34.658593  324968 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:33:34.658711  324968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:33:34.663315  324968 start.go:563] Will wait 60s for crictl version
	I1017 19:33:34.663396  324968 ssh_runner.go:195] Run: which crictl
	I1017 19:33:34.667245  324968 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:33:34.704265  324968 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:33:34.704411  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:33:34.738612  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:33:34.775046  324968 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:33:34.777914  324968 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 19:33:34.780845  324968 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1017 19:33:34.783723  324968 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1017 19:33:34.786627  324968 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:33:34.808635  324968 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:33:34.815185  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:33:34.827225  324968 mustload.go:65] Loading cluster: ha-254035
	I1017 19:33:34.827480  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:34.827743  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:33:34.847031  324968 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:33:34.847380  324968 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.5
	I1017 19:33:34.847390  324968 certs.go:195] generating shared ca certs ...
	I1017 19:33:34.847415  324968 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:33:34.847641  324968 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:33:34.847708  324968 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:33:34.847720  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:33:34.847749  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:33:34.847765  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:33:34.847775  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:33:34.847869  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:33:34.847922  324968 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:33:34.847932  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:33:34.847959  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:33:34.847999  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:33:34.848045  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:33:34.848123  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:33:34.848155  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:33:34.848175  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:33:34.848187  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:34.848206  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:33:34.868384  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:33:34.889303  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:33:34.915103  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:33:34.947695  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:33:34.970689  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:33:34.991429  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:33:35.015821  324968 ssh_runner.go:195] Run: openssl version
	I1017 19:33:35.023417  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:33:35.033117  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:33:35.038047  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:33:35.038163  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:33:35.080117  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:33:35.088886  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:33:35.098283  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:35.103083  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:35.103169  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:35.146427  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:33:35.160483  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:33:35.172663  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:33:35.177994  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:33:35.178116  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:33:35.221220  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:33:35.236438  324968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:33:35.243682  324968 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 19:33:35.243736  324968 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.34.1 crio false true} ...
	I1017 19:33:35.243840  324968 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:33:35.243919  324968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:33:35.253526  324968 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:33:35.253625  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1017 19:33:35.262623  324968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:33:35.276015  324968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:33:35.290622  324968 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:33:35.294428  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:33:35.304725  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:35.455305  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:33:35.471222  324968 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1017 19:33:35.471611  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:35.476720  324968 out.go:179] * Verifying Kubernetes components...
	I1017 19:33:35.479857  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:35.599550  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:33:35.615050  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 19:33:35.615120  324968 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 19:33:35.615344  324968 node_ready.go:35] waiting up to 6m0s for node "ha-254035-m04" to be "Ready" ...
	W1017 19:33:37.619036  324968 node_ready.go:57] node "ha-254035-m04" has "Ready":"Unknown" status (will retry)
	W1017 19:33:39.619924  324968 node_ready.go:57] node "ha-254035-m04" has "Ready":"Unknown" status (will retry)
	W1017 19:33:42.120954  324968 node_ready.go:57] node "ha-254035-m04" has "Ready":"Unknown" status (will retry)
	I1017 19:33:42.619614  324968 node_ready.go:49] node "ha-254035-m04" is "Ready"
	I1017 19:33:42.619639  324968 node_ready.go:38] duration metric: took 7.004273155s for node "ha-254035-m04" to be "Ready" ...
	I1017 19:33:42.619652  324968 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:33:42.619704  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:33:42.643671  324968 system_svc.go:56] duration metric: took 24.010635ms WaitForService to wait for kubelet
	I1017 19:33:42.643702  324968 kubeadm.go:586] duration metric: took 7.172435361s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:33:42.643720  324968 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:33:42.658471  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:42.658503  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:42.658515  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:42.658520  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:42.658524  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:42.658528  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:42.658532  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:42.658536  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:42.658541  324968 node_conditions.go:105] duration metric: took 14.815335ms to run NodePressure ...
	I1017 19:33:42.658553  324968 start.go:241] waiting for startup goroutines ...
	I1017 19:33:42.658578  324968 start.go:255] writing updated cluster config ...
	I1017 19:33:42.658896  324968 ssh_runner.go:195] Run: rm -f paused
	I1017 19:33:42.666036  324968 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:33:42.666578  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 19:33:42.748115  324968 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gfklr" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.799614  324968 pod_ready.go:94] pod "coredns-66bc5c9577-gfklr" is "Ready"
	I1017 19:33:42.799652  324968 pod_ready.go:86] duration metric: took 51.505206ms for pod "coredns-66bc5c9577-gfklr" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.799662  324968 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wbgc8" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.845846  324968 pod_ready.go:94] pod "coredns-66bc5c9577-wbgc8" is "Ready"
	I1017 19:33:42.845885  324968 pod_ready.go:86] duration metric: took 46.206115ms for pod "coredns-66bc5c9577-wbgc8" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.863051  324968 pod_ready.go:83] waiting for pod "etcd-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.871909  324968 pod_ready.go:94] pod "etcd-ha-254035" is "Ready"
	I1017 19:33:42.871935  324968 pod_ready.go:86] duration metric: took 8.855813ms for pod "etcd-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.871945  324968 pod_ready.go:83] waiting for pod "etcd-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.880198  324968 pod_ready.go:94] pod "etcd-ha-254035-m02" is "Ready"
	I1017 19:33:42.880226  324968 pod_ready.go:86] duration metric: took 8.274439ms for pod "etcd-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.880236  324968 pod_ready.go:83] waiting for pod "etcd-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.067322  324968 request.go:683] "Waited before sending request" delay="183.325668ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m03"
	I1017 19:33:43.071041  324968 pod_ready.go:94] pod "etcd-ha-254035-m03" is "Ready"
	I1017 19:33:43.071067  324968 pod_ready.go:86] duration metric: took 190.824595ms for pod "etcd-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.267504  324968 request.go:683] "Waited before sending request" delay="196.34087ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1017 19:33:43.271686  324968 pod_ready.go:83] waiting for pod "kube-apiserver-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.468020  324968 request.go:683] "Waited before sending request" delay="196.217403ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-254035"
	I1017 19:33:43.666979  324968 request.go:683] "Waited before sending request" delay="194.232504ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035"
	I1017 19:33:43.670115  324968 pod_ready.go:94] pod "kube-apiserver-ha-254035" is "Ready"
	I1017 19:33:43.670144  324968 pod_ready.go:86] duration metric: took 398.430494ms for pod "kube-apiserver-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.670153  324968 pod_ready.go:83] waiting for pod "kube-apiserver-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.867552  324968 request.go:683] "Waited before sending request" delay="197.322859ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-254035-m02"
	I1017 19:33:44.067901  324968 request.go:683] "Waited before sending request" delay="193.273769ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m02"
	I1017 19:33:44.071414  324968 pod_ready.go:94] pod "kube-apiserver-ha-254035-m02" is "Ready"
	I1017 19:33:44.071442  324968 pod_ready.go:86] duration metric: took 401.282299ms for pod "kube-apiserver-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:44.071453  324968 pod_ready.go:83] waiting for pod "kube-apiserver-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:44.267920  324968 request.go:683] "Waited before sending request" delay="196.393406ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-254035-m03"
	I1017 19:33:44.467967  324968 request.go:683] "Waited before sending request" delay="196.317182ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m03"
	I1017 19:33:44.472041  324968 pod_ready.go:94] pod "kube-apiserver-ha-254035-m03" is "Ready"
	I1017 19:33:44.472068  324968 pod_ready.go:86] duration metric: took 400.608635ms for pod "kube-apiserver-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:44.667472  324968 request.go:683] "Waited before sending request" delay="195.295893ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1017 19:33:44.671549  324968 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:44.868014  324968 request.go:683] "Waited before sending request" delay="196.366601ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-254035"
	I1017 19:33:45.067086  324968 request.go:683] "Waited before sending request" delay="193.311224ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035"
	I1017 19:33:45.072221  324968 pod_ready.go:94] pod "kube-controller-manager-ha-254035" is "Ready"
	I1017 19:33:45.072250  324968 pod_ready.go:86] duration metric: took 400.67411ms for pod "kube-controller-manager-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:45.072261  324968 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:45.267682  324968 request.go:683] "Waited before sending request" delay="195.335416ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-254035-m02"
	I1017 19:33:45.467614  324968 request.go:683] "Waited before sending request" delay="188.393045ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m02"
	I1017 19:33:45.470975  324968 pod_ready.go:94] pod "kube-controller-manager-ha-254035-m02" is "Ready"
	I1017 19:33:45.471007  324968 pod_ready.go:86] duration metric: took 398.736291ms for pod "kube-controller-manager-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:45.471017  324968 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:45.667358  324968 request.go:683] "Waited before sending request" delay="196.263104ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-254035-m03"
	I1017 19:33:45.867478  324968 request.go:683] "Waited before sending request" delay="196.63098ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m03"
	I1017 19:33:45.870372  324968 pod_ready.go:94] pod "kube-controller-manager-ha-254035-m03" is "Ready"
	I1017 19:33:45.870427  324968 pod_ready.go:86] duration metric: took 399.402071ms for pod "kube-controller-manager-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.067916  324968 request.go:683] "Waited before sending request" delay="197.353037ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1017 19:33:46.071965  324968 pod_ready.go:83] waiting for pod "kube-proxy-548b2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.267426  324968 request.go:683] "Waited before sending request" delay="195.355338ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-548b2"
	I1017 19:33:46.467392  324968 request.go:683] "Waited before sending request" delay="193.351461ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035"
	I1017 19:33:46.470716  324968 pod_ready.go:94] pod "kube-proxy-548b2" is "Ready"
	I1017 19:33:46.470745  324968 pod_ready.go:86] duration metric: took 398.750601ms for pod "kube-proxy-548b2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.470755  324968 pod_ready.go:83] waiting for pod "kube-proxy-b4fr6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.667046  324968 request.go:683] "Waited before sending request" delay="196.219848ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b4fr6"
	I1017 19:33:46.867280  324968 request.go:683] "Waited before sending request" delay="196.299896ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m02"
	I1017 19:33:46.870670  324968 pod_ready.go:94] pod "kube-proxy-b4fr6" is "Ready"
	I1017 19:33:46.870707  324968 pod_ready.go:86] duration metric: took 399.946057ms for pod "kube-proxy-b4fr6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.870717  324968 pod_ready.go:83] waiting for pod "kube-proxy-fr5ts" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:47.067054  324968 request.go:683] "Waited before sending request" delay="196.240361ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fr5ts"
	I1017 19:33:47.267565  324968 request.go:683] "Waited before sending request" delay="196.190762ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m04"
	I1017 19:33:47.467316  324968 request.go:683] "Waited before sending request" delay="96.206992ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fr5ts"
	I1017 19:33:47.667564  324968 request.go:683] "Waited before sending request" delay="186.261475ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m04"
	I1017 19:33:48.067382  324968 request.go:683] "Waited before sending request" delay="186.267596ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m04"
	I1017 19:33:48.467049  324968 request.go:683] "Waited before sending request" delay="92.145258ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m04"
	W1017 19:33:48.877689  324968 pod_ready.go:104] pod "kube-proxy-fr5ts" is not "Ready", error: <nil>
	W1017 19:33:50.877808  324968 pod_ready.go:104] pod "kube-proxy-fr5ts" is not "Ready", error: <nil>
	I1017 19:33:52.377837  324968 pod_ready.go:94] pod "kube-proxy-fr5ts" is "Ready"
	I1017 19:33:52.377866  324968 pod_ready.go:86] duration metric: took 5.507143006s for pod "kube-proxy-fr5ts" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.377876  324968 pod_ready.go:83] waiting for pod "kube-proxy-k56cv" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.386625  324968 pod_ready.go:94] pod "kube-proxy-k56cv" is "Ready"
	I1017 19:33:52.386655  324968 pod_ready.go:86] duration metric: took 8.770737ms for pod "kube-proxy-k56cv" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.390245  324968 pod_ready.go:83] waiting for pod "kube-scheduler-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.467536  324968 request.go:683] "Waited before sending request" delay="77.200252ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-254035"
	I1017 19:33:52.667089  324968 request.go:683] "Waited before sending request" delay="193.299146ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035"
	I1017 19:33:52.670454  324968 pod_ready.go:94] pod "kube-scheduler-ha-254035" is "Ready"
	I1017 19:33:52.670484  324968 pod_ready.go:86] duration metric: took 280.216212ms for pod "kube-scheduler-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.670495  324968 pod_ready.go:83] waiting for pod "kube-scheduler-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.867921  324968 request.go:683] "Waited before sending request" delay="197.327438ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-254035-m02"
	I1017 19:33:53.067947  324968 request.go:683] "Waited before sending request" delay="195.176914ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m02"
	I1017 19:33:53.072896  324968 pod_ready.go:94] pod "kube-scheduler-ha-254035-m02" is "Ready"
	I1017 19:33:53.072972  324968 pod_ready.go:86] duration metric: took 402.46965ms for pod "kube-scheduler-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:53.072997  324968 pod_ready.go:83] waiting for pod "kube-scheduler-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:53.267273  324968 request.go:683] "Waited before sending request" delay="194.142538ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-254035-m03"
	I1017 19:33:53.467118  324968 request.go:683] "Waited before sending request" delay="196.200739ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m03"
	I1017 19:33:53.470125  324968 pod_ready.go:94] pod "kube-scheduler-ha-254035-m03" is "Ready"
	I1017 19:33:53.470152  324968 pod_ready.go:86] duration metric: took 397.132807ms for pod "kube-scheduler-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:53.470163  324968 pod_ready.go:40] duration metric: took 10.804092337s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:33:53.525625  324968 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 19:33:53.530847  324968 out.go:179] * Done! kubectl is now configured to use "ha-254035" cluster and "default" namespace by default
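The "pod_ready.go" lines above show the readiness gate at the end of this run: each kube-system control-plane pod is polled until its Ready condition is true (with client-side throttling visible in the "Waited before sending request" messages). Below is a minimal client-go sketch of that polling pattern, not minikube's actual implementation; it assumes a kubeconfig at the default location, and the pod name is copied from the log purely for illustration.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodReady polls the pod until its PodReady condition is True,
    // mirroring the "waiting for pod ... to be Ready" loop in the log above.
    func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat API errors as transient and keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // "kube-proxy-548b2" is one of the pods waited on in this log.
        if err := waitForPodReady(context.Background(), cs, "kube-system", "kube-proxy-548b2"); err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }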
	
	
	==> CRI-O <==
	Oct 17 19:33:01 ha-254035 crio[667]: time="2025-10-17T19:33:01.657638061Z" level=info msg="Started container" PID=1327 containerID=e9ece41337b80cfabb4196dc2d55dc644a949f49cd22450cf623b7f5257d5d69 description=kube-system/kindnet-gzzsg/kindnet-cni id=1467213a-df01-47f7-91a8-c9ecfa2692be name=/runtime.v1.RuntimeService/StartContainer sandboxID=fe908ac1b77150ea99b48733349b105097380b5cd2e2f243156591744040d978
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.209485703Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.212893465Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.212927827Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.21295117Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.216661947Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.216697064Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.216721523Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.220161292Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.220191347Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.220215756Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.223221953Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.223254084Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:33:27 ha-254035 conmon[1135]: conmon 0cc2287088bc871e7f4d <ninfo>: container 1139 exited with status 1
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.068588792Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b7b509f3-b012-49ed-9e6d-e0ab750c4b6b name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.07344856Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=25fe3696-e90b-4a83-a3ad-33aa6af72f3d name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.077367011Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=28e7f811-dec4-4fcb-9722-3a341888b632 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.077693042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.096972398Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.097208428Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/17cd3234a8a982607354e16eb6b88983eecf7edea137eb96fbc8cd597e6577e2/merged/etc/passwd: no such file or directory"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.09724453Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/17cd3234a8a982607354e16eb6b88983eecf7edea137eb96fbc8cd597e6577e2/merged/etc/group: no such file or directory"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.108385903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.143116992Z" level=info msg="Created container f03a6dda4443a7ca4881c99c1a1b1d649515e8a1e7c9d51bf1fad01a41e7083e: kube-system/storage-provisioner/storage-provisioner" id=28e7f811-dec4-4fcb-9722-3a341888b632 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.144104625Z" level=info msg="Starting container: f03a6dda4443a7ca4881c99c1a1b1d649515e8a1e7c9d51bf1fad01a41e7083e" id=e482d8e9-fc6c-4e49-a1a6-8af83382da5d name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.153409034Z" level=info msg="Started container" PID=1450 containerID=f03a6dda4443a7ca4881c99c1a1b1d649515e8a1e7c9d51bf1fad01a41e7083e description=kube-system/storage-provisioner/storage-provisioner id=e482d8e9-fc6c-4e49-a1a6-8af83382da5d name=/runtime.v1.RuntimeService/StartContainer sandboxID=ebb6a1f53c4835f98f170cb0cc9a8c381e017f19896c6a29b18d262526414238
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	f03a6dda4443a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago       Running             storage-provisioner       4                   ebb6a1f53c483       storage-provisioner                 kube-system
	e9ece41337b80       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 minutes ago       Running             kindnet-cni               2                   fe908ac1b7715       kindnet-gzzsg                       kube-system
	83532ba0435f2       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   2 minutes ago       Running             busybox                   2                   0240e4c18c32a       busybox-7b57f96db7-nc6x2            default
	db8d02bae2fa1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago       Running             coredns                   2                   507d7b819debe       coredns-66bc5c9577-wbgc8            kube-system
	706bee2267664       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago       Running             coredns                   2                   c6367bcfd35d4       coredns-66bc5c9577-gfklr            kube-system
	d51ad27d42179       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 minutes ago       Running             kube-proxy                2                   7bb73f9365e64       kube-proxy-548b2                    kube-system
	0cc2287088bc8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago       Exited              storage-provisioner       3                   ebb6a1f53c483       storage-provisioner                 kube-system
	cd9dec0514b24       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   2 minutes ago       Running             kube-controller-manager   7                   251b6be3c0c4f       kube-controller-manager-ha-254035   kube-system
	d713edbb381bb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   3 minutes ago       Exited              kube-controller-manager   6                   251b6be3c0c4f       kube-controller-manager-ha-254035   kube-system
	fb534fcdb2d89       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   3 minutes ago       Running             kube-apiserver            3                   0fd33e0b5d3e5       kube-apiserver-ha-254035            kube-system
	ab6180a80f68d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   3 minutes ago       Running             etcd                      2                   bc1edea2f668b       etcd-ha-254035                      kube-system
	c4609fc3fd1c0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   3 minutes ago       Running             kube-scheduler            2                   32d4263a101a2       kube-scheduler-ha-254035            kube-system
	0652fd27f5bff       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   3 minutes ago       Running             kube-vip                  1                   31afc78057fe9       kube-vip-ha-254035                  kube-system
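The table above is CRI container state as reported by CRI-O over its socket, the same endpoint the earlier "crictl version" call used (unix:///var/run/crio/crio.sock, written to /etc/crictl.yaml in this log). The following is a rough sketch of querying that endpoint directly, assuming the google.golang.org/grpc and k8s.io/cri-api modules and access to the socket; it is illustrative only, not how this report gathered the table.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        pb "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Socket path taken from the runtime-endpoint minikube wrote to /etc/crictl.yaml above.
        conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := pb.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Same fields "crictl version" printed earlier (RuntimeName, RuntimeVersion, RuntimeApiVersion).
        ver, err := rt.Version(ctx, &pb.VersionRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

        // Roughly the data summarized in the container status table above.
        list, err := rt.ListContainers(ctx, &pb.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range list.Containers {
            fmt.Printf("%-13.13s  %-28s  %s\n", c.Id, c.Metadata.Name, c.State)
        }
    }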
	
	
	==> coredns [706bee22676646b717cd807f92b3341bc3bee9a22195d1a96f63858b9fe3f381] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35042 - 59078 "HINFO IN 7580743585985535806.8578026735020374478. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014332173s
	
	
	==> coredns [db8d02bae2fa1a6f368ea962e35a1111cb4230bcadf4709cf7545ace2d4272d6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35443 - 54421 "HINFO IN 8550404136984308969.4709042246801981974. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015029672s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
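The repeated "dial tcp 10.96.0.1:443: i/o timeout" errors mean this coredns instance could not reach the in-cluster kubernetes Service VIP while the control plane was restarting. A minimal probe for that same reachability, offered only as a sketch, would need to run somewhere the Service CIDR (10.96.0.0/12) is routable, e.g. inside a pod or on the node:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // 10.96.0.1:443 is the Service VIP coredns is timing out against in the log above.
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
        if err != nil {
            fmt.Println("dial failed:", err) // matches the "i/o timeout" symptom
            return
        }
        defer conn.Close()

        // Certificate verification is skipped because this only checks reachability, not identity.
        tconn := tls.Client(conn, &tls.Config{InsecureSkipVerify: true})
        if err := tconn.Handshake(); err != nil {
            fmt.Println("TLS handshake failed:", err)
            return
        }
        fmt.Println("apiserver Service VIP reachable")
    }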
	
	
	==> describe nodes <==
	Name:               ha-254035
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_17_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:17:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:35:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:32:45 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:32:45 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:32:45 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:32:45 +0000   Fri, 17 Oct 2025 19:32:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-254035
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                eadb5c5f-dcbb-485c-aea7-3aa5b951fd9e
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-nc6x2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-66bc5c9577-gfklr             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 coredns-66bc5c9577-wbgc8             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-ha-254035                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-gzzsg                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-254035             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-254035    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-548b2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-254035             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-254035                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 2m33s                  kube-proxy       
	  Normal   Starting                 9m31s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-254035 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-254035 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 17m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-254035 status is now: NodeHasSufficientPID
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           17m                    node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           16m                    node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-254035 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)      kubelet          Node ha-254035 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-254035 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-254035 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           9m                     node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   NodeHasSufficientMemory  3m18s (x8 over 3m18s)  kubelet          Node ha-254035 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m18s (x8 over 3m18s)  kubelet          Node ha-254035 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m18s (x8 over 3m18s)  kubelet          Node ha-254035 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m40s                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           2m39s                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           2m3s                   node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           49s                    node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	
	
	Name:               ha-254035-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_18_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:18:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:35:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:33:05 +0000   Fri, 17 Oct 2025 19:32:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:33:05 +0000   Fri, 17 Oct 2025 19:32:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:33:05 +0000   Fri, 17 Oct 2025 19:32:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:33:05 +0000   Fri, 17 Oct 2025 19:32:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-254035-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                6c5e97e0-fa27-407d-a976-b646e8a40ca5
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6xjlp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-254035-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-vss98                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-ha-254035-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-254035-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-b4fr6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-254035-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-254035-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 2m12s                  kube-proxy       
	  Normal   RegisteredNode           16m                    node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           16m                    node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Warning  CgroupV1                 12m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)      kubelet          Node ha-254035-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-254035-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-254035-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeNotReady             12m                    node-controller  Node ha-254035-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           9m                     node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   NodeNotReady             8m10s                  node-controller  Node ha-254035-m02 status is now: NodeNotReady
	  Normal   Starting                 3m15s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m15s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m15s (x8 over 3m15s)  kubelet          Node ha-254035-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m15s (x8 over 3m15s)  kubelet          Node ha-254035-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m15s (x8 over 3m15s)  kubelet          Node ha-254035-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m40s                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           2m39s                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           2m3s                   node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           49s                    node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	
	
	Name:               ha-254035-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_20_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:19:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:35:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:35:09 +0000   Fri, 17 Oct 2025 19:33:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:35:09 +0000   Fri, 17 Oct 2025 19:33:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:35:09 +0000   Fri, 17 Oct 2025 19:33:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:35:09 +0000   Fri, 17 Oct 2025 19:33:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-254035-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                2f343c58-0cc9-444a-bc88-7799c3ff52df
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-979zm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-254035-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-2k9kj                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-254035-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-254035-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-k56cv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-254035-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-254035-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   Starting                 108s                   kube-proxy       
	  Normal   RegisteredNode           15m                    node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           9m                     node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   NodeNotReady             8m10s                  node-controller  Node ha-254035-m03 status is now: NodeNotReady
	  Normal   RegisteredNode           2m40s                  node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           2m39s                  node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Warning  CgroupV1                 2m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 2m34s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m33s (x8 over 2m33s)  kubelet          Node ha-254035-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m33s (x8 over 2m33s)  kubelet          Node ha-254035-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s (x8 over 2m33s)  kubelet          Node ha-254035-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m3s                   node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           49s                    node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	
	
	Name:               ha-254035-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_21_16_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:21:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:35:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:33:42 +0000   Fri, 17 Oct 2025 19:33:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:33:42 +0000   Fri, 17 Oct 2025 19:33:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:33:42 +0000   Fri, 17 Oct 2025 19:33:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:33:42 +0000   Fri, 17 Oct 2025 19:33:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-254035-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                12691412-a8b5-426e-846e-d6161e527ea6
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pwhwv       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-proxy-fr5ts    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 99s                  kube-proxy       
	  Normal   Starting                 14m                  kube-proxy       
	  Warning  CgroupV1                 14m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m (x3 over 14m)    kubelet          Node ha-254035-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x3 over 14m)    kubelet          Node ha-254035-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m (x3 over 14m)    kubelet          Node ha-254035-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           14m                  node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   NodeReady                13m                  kubelet          Node ha-254035-m04 status is now: NodeReady
	  Normal   RegisteredNode           12m                  node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           9m                   node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   NodeNotReady             8m10s                node-controller  Node ha-254035-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           2m40s                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           2m39s                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           2m3s                 node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   Starting                 2m2s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m2s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  119s (x8 over 2m2s)  kubelet          Node ha-254035-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    119s (x8 over 2m2s)  kubelet          Node ha-254035-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     119s (x8 over 2m2s)  kubelet          Node ha-254035-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                  node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	
	
	Name:               ha-254035-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_34_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:34:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m05
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:35:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:35:27 +0000   Fri, 17 Oct 2025 19:34:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:35:27 +0000   Fri, 17 Oct 2025 19:34:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:35:27 +0000   Fri, 17 Oct 2025 19:34:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:35:27 +0000   Fri, 17 Oct 2025 19:35:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-254035-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                0d42d24d-7b77-4e0b-8b88-c22eb0bbccca
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-254035-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         45s
	  kube-system                 kindnet-6wxsk                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      48s
	  kube-system                 kube-apiserver-ha-254035-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-controller-manager-ha-254035-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-dschq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-scheduler-ha-254035-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-vip-ha-254035-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        43s   kube-proxy       
	  Normal  RegisteredNode  48s   node-controller  Node ha-254035-m05 event: Registered Node ha-254035-m05 in Controller
	  Normal  RegisteredNode  45s   node-controller  Node ha-254035-m05 event: Registered Node ha-254035-m05 in Controller
	  Normal  RegisteredNode  44s   node-controller  Node ha-254035-m05 event: Registered Node ha-254035-m05 in Controller
	  Normal  RegisteredNode  43s   node-controller  Node ha-254035-m05 event: Registered Node ha-254035-m05 in Controller
	
	
	==> dmesg <==
	[Oct17 18:34] overlayfs: idmapped layers are currently not supported
	[Oct17 18:35] overlayfs: idmapped layers are currently not supported
	[Oct17 18:36] overlayfs: idmapped layers are currently not supported
	[ +20.850590] overlayfs: idmapped layers are currently not supported
	[Oct17 18:38] overlayfs: idmapped layers are currently not supported
	[ +19.812679] overlayfs: idmapped layers are currently not supported
	[Oct17 18:39] overlayfs: idmapped layers are currently not supported
	[ +19.225178] overlayfs: idmapped layers are currently not supported
	[Oct17 18:40] overlayfs: idmapped layers are currently not supported
	[Oct17 18:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 18:57] overlayfs: idmapped layers are currently not supported
	[Oct17 19:03] overlayfs: idmapped layers are currently not supported
	[Oct17 19:04] overlayfs: idmapped layers are currently not supported
	[Oct17 19:17] overlayfs: idmapped layers are currently not supported
	[Oct17 19:18] overlayfs: idmapped layers are currently not supported
	[Oct17 19:19] overlayfs: idmapped layers are currently not supported
	[Oct17 19:21] overlayfs: idmapped layers are currently not supported
	[Oct17 19:22] overlayfs: idmapped layers are currently not supported
	[Oct17 19:23] overlayfs: idmapped layers are currently not supported
	[  +4.119232] overlayfs: idmapped layers are currently not supported
	[Oct17 19:32] overlayfs: idmapped layers are currently not supported
	[  +2.727676] overlayfs: idmapped layers are currently not supported
	[ +41.644994] overlayfs: idmapped layers are currently not supported
	[Oct17 19:33] overlayfs: idmapped layers are currently not supported
	[Oct17 19:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ab6180a80f68dcb65397cf72c97a3f14b4b536aa865a3b252a4a6ebf62d58b59] <==
	{"level":"warn","ts":"2025-10-17T19:34:31.290023Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"d7447b558ebb0f55","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:34:31.392500Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"d7447b558ebb0f55","error":"failed to write d7447b558ebb0f55 on stream Message (write tcp 192.168.49.2:2380->192.168.49.6:36664: write: connection reset by peer)"}
	{"level":"warn","ts":"2025-10-17T19:34:31.393059Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"d7447b558ebb0f55"}
	{"level":"info","ts":"2025-10-17T19:34:31.611714Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"d7447b558ebb0f55"}
	{"level":"info","ts":"2025-10-17T19:34:31.685499Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"d7447b558ebb0f55","stream-type":"stream Message"}
	{"level":"info","ts":"2025-10-17T19:34:31.685558Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"d7447b558ebb0f55"}
	{"level":"info","ts":"2025-10-17T19:34:31.785401Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"d7447b558ebb0f55"}
	{"level":"info","ts":"2025-10-17T19:34:31.913669Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"d7447b558ebb0f55","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-10-17T19:34:31.913713Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"d7447b558ebb0f55"}
	{"level":"info","ts":"2025-10-17T19:34:31.913724Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"d7447b558ebb0f55"}
	{"level":"info","ts":"2025-10-17T19:34:31.917918Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"d7447b558ebb0f55"}
	{"level":"info","ts":"2025-10-17T19:34:43.257432Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"warn","ts":"2025-10-17T19:34:44.259903Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.910342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-5brbl\" limit:1 ","response":"range_response_count:1 size:3431"}
	{"level":"info","ts":"2025-10-17T19:34:44.260014Z","caller":"traceutil/trace.go:172","msg":"trace[1324092327] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-5brbl; range_end:; response_count:1; response_revision:3530; }","duration":"113.029945ms","start":"2025-10-17T19:34:44.146970Z","end":"2025-10-17T19:34:44.260000Z","steps":["trace[1324092327] 'agreement among raft nodes before linearized reading'  (duration: 112.817177ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:34:44.260233Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.277353ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T19:34:44.260445Z","caller":"traceutil/trace.go:172","msg":"trace[1645675066] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:3530; }","duration":"113.488053ms","start":"2025-10-17T19:34:44.146945Z","end":"2025-10-17T19:34:44.260433Z","steps":["trace[1645675066] 'agreement among raft nodes before linearized reading'  (duration: 113.25154ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:34:44.260706Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.783182ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-6wxsk\" limit:1 ","response":"range_response_count:1 size:3694"}
	{"level":"info","ts":"2025-10-17T19:34:44.273502Z","caller":"traceutil/trace.go:172","msg":"trace[528875741] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-6wxsk; range_end:; response_count:1; response_revision:3530; }","duration":"126.569497ms","start":"2025-10-17T19:34:44.146909Z","end":"2025-10-17T19:34:44.273479Z","steps":["trace[528875741] 'agreement among raft nodes before linearized reading'  (duration: 113.720915ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:34:44.263030Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.203225ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-h77vc\" limit:1 ","response":"range_response_count:1 size:4099"}
	{"level":"info","ts":"2025-10-17T19:34:44.273809Z","caller":"traceutil/trace.go:172","msg":"trace[249640023] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-h77vc; range_end:; response_count:1; response_revision:3530; }","duration":"126.986599ms","start":"2025-10-17T19:34:44.146813Z","end":"2025-10-17T19:34:44.273800Z","steps":["trace[249640023] 'agreement among raft nodes before linearized reading'  (duration: 114.610821ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:34:44.263260Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.647232ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-gl87l\" limit:1 ","response":"range_response_count:1 size:3694"}
	{"level":"info","ts":"2025-10-17T19:34:44.276382Z","caller":"traceutil/trace.go:172","msg":"trace[1762693465] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-gl87l; range_end:; response_count:1; response_revision:3530; }","duration":"129.479699ms","start":"2025-10-17T19:34:44.146889Z","end":"2025-10-17T19:34:44.276369Z","steps":["trace[1762693465] 'agreement among raft nodes before linearized reading'  (duration: 114.22256ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:34:44.609239Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-10-17T19:34:47.700295Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-10-17T19:35:01.023513Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"d7447b558ebb0f55","bytes":6735756,"size":"6.7 MB","took":"31.368053313s"}
	
	
	==> kernel <==
	 19:35:31 up  2:18,  0 user,  load average: 3.01, 2.69, 1.87
	Linux ha-254035 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e9ece41337b80cfabb4196dc2d55dc644a949f49cd22450cf623b7f5257d5d69] <==
	I1017 19:35:02.207576       1 main.go:301] handling current node
	I1017 19:35:02.207626       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:35:02.207657       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:35:02.207813       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:35:02.207897       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:35:12.207740       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:35:12.207784       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:35:12.207966       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 19:35:12.207980       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	I1017 19:35:12.208050       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1017 19:35:12.208062       1 main.go:324] Node ha-254035-m05 has CIDR [10.244.4.0/24] 
	I1017 19:35:12.208475       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:35:12.208588       1 main.go:301] handling current node
	I1017 19:35:12.208629       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:35:12.208671       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:35:22.215537       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:35:22.215640       1 main.go:301] handling current node
	I1017 19:35:22.215678       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:35:22.215707       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:35:22.215912       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:35:22.215951       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:35:22.216044       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 19:35:22.216059       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	I1017 19:35:22.216115       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1017 19:35:22.216128       1 main.go:324] Node ha-254035-m05 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [fb534fcdb2d895a4c9c908d2c41c5a3a49e1ba7a9a8c54cca3e0f68236d86194] <==
	{"level":"warn","ts":"2025-10-17T19:32:45.556106Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001deba40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-17T19:32:45.556124Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028872c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	I1017 19:32:45.742745       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:32:45.761612       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:32:45.766614       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 19:32:45.766727       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 19:32:45.766874       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 19:32:45.766889       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 19:32:45.772156       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 19:32:45.782338       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 19:32:45.782660       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 19:32:45.782735       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 19:32:45.786264       1 cache.go:39] Caches are synced for autoregister controller
	I1017 19:32:45.801116       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 19:32:45.801154       1 policy_source.go:240] refreshing policies
	I1017 19:32:45.801215       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 19:32:45.801340       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 19:32:45.823912       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1017 19:32:45.892067       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 19:32:46.104708       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:32:51.664034       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:32:51.782010       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:32:51.908184       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:32:52.058599       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 19:32:52.107924       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [cd9dec0514b2422e9e0e06a464213e0f38cdfce11c6ca20c97c479d028fcac71] <==
	I1017 19:32:51.704899       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 19:32:51.705461       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 19:32:51.705774       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 19:32:51.705860       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 19:32:51.707308       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 19:32:51.708143       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:32:51.708196       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 19:32:51.713230       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 19:32:51.722295       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:32:51.793811       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-254035-m04"
	I1017 19:32:51.793885       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-254035"
	I1017 19:32:51.793911       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-254035-m02"
	I1017 19:32:51.793948       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-254035-m03"
	I1017 19:32:51.794411       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	I1017 19:32:56.794689       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 19:33:32.102831       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-m4bp9 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-m4bp9\": the object has been modified; please apply your changes to the latest version and try again"
	I1017 19:33:32.116286       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"9bc45666-7349-43f1-b1bc-8fe50797293b", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-m4bp9 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-m4bp9": the object has been modified; please apply your changes to the latest version and try again
	I1017 19:33:42.572582       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-254035-m04"
	E1017 19:34:43.072957       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-xwhmv failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-xwhmv\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1017 19:34:43.102626       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-xwhmv failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-xwhmv\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1017 19:34:43.810556       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-254035-m05\" does not exist"
	I1017 19:34:43.811708       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-254035-m04"
	I1017 19:34:43.843170       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-254035-m05" podCIDRs=["10.244.4.0/24"]
	I1017 19:34:46.847409       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-254035-m05"
	I1017 19:35:28.007060       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-254035-m04"
	
	
	==> kube-controller-manager [d713edbb381bb7ac4baa67d925ebd85ec5ab61fa9319db2f03ba47d667e26940] <==
	I1017 19:32:15.577934       1 serving.go:386] Generated self-signed cert in-memory
	I1017 19:32:17.585378       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1017 19:32:17.585478       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:32:17.587388       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1017 19:32:17.588088       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1017 19:32:17.588254       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 19:32:17.588373       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1017 19:32:32.131519       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [d51ad27d42179adee09ff705d12ad5d15a734809e4732ad3eb1c4429dc7021e6] <==
	I1017 19:32:57.743934       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:32:57.902619       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:32:57.934204       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:32:57.934232       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 19:32:57.934302       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:32:58.002595       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:32:58.002661       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:32:58.008742       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:32:58.009306       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:32:58.009381       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:32:58.011974       1 config.go:200] "Starting service config controller"
	I1017 19:32:58.011999       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:32:58.021529       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:32:58.021612       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:32:58.021667       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:32:58.021695       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:32:58.021970       1 config.go:309] "Starting node config controller"
	I1017 19:32:58.021993       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:32:58.112358       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:32:58.122792       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:32:58.122780       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:32:58.122830       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [c4609fc3fd1c0d5440395e0986380eb9eb076a0e1e1faa4ad132e67cd913032d] <==
	E1017 19:34:44.161707       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-zh56p\": pod kube-proxy-zh56p is already assigned to node \"ha-254035-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-zh56p"
	I1017 19:34:44.164605       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-zh56p" node="ha-254035-m05"
	E1017 19:34:44.195631       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dschq\": pod kube-proxy-dschq is already assigned to node \"ha-254035-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dschq" node="ha-254035-m05"
	E1017 19:34:44.200978       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod d8f101a4-5151-4c21-8b54-e5bb2097eda0(kube-system/kube-proxy-dschq) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-dschq"
	E1017 19:34:44.201073       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dschq\": pod kube-proxy-dschq is already assigned to node \"ha-254035-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-dschq"
	I1017 19:34:44.202488       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dschq" node="ha-254035-m05"
	E1017 19:34:44.277021       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5brbl\": pod kube-proxy-5brbl is already assigned to node \"ha-254035-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5brbl" node="ha-254035-m05"
	E1017 19:34:44.278335       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 0ce3c1ba-82f7-47c9-863a-b2da2399dcaa(kube-system/kube-proxy-5brbl) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-5brbl"
	E1017 19:34:44.278456       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5brbl\": pod kube-proxy-5brbl is already assigned to node \"ha-254035-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-5brbl"
	I1017 19:34:44.288718       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5brbl" node="ha-254035-m05"
	E1017 19:34:44.292265       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-6wxsk\": pod kindnet-6wxsk is already assigned to node \"ha-254035-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-6wxsk" node="ha-254035-m05"
	E1017 19:34:44.292461       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod d246ba27-9741-4566-ad25-03513a959e1f(kube-system/kindnet-6wxsk) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-6wxsk"
	E1017 19:34:44.293474       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-6wxsk\": pod kindnet-6wxsk is already assigned to node \"ha-254035-m05\"" logger="UnhandledError" pod="kube-system/kindnet-6wxsk"
	E1017 19:34:44.292386       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-h77vc\": pod kindnet-h77vc is already assigned to node \"ha-254035-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-h77vc" node="ha-254035-m05"
	E1017 19:34:44.294925       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod daef16ff-3a08-48e6-bab5-f2be670e34d1(kube-system/kindnet-h77vc) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-h77vc"
	E1017 19:34:44.295786       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-h77vc\": pod kindnet-h77vc is already assigned to node \"ha-254035-m05\"" logger="UnhandledError" pod="kube-system/kindnet-h77vc"
	E1017 19:34:44.292415       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gl87l\": pod kindnet-gl87l is already assigned to node \"ha-254035-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-gl87l" node="ha-254035-m05"
	E1017 19:34:44.295845       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 125dbd8c-395b-479d-9509-5f1253f028f6(kube-system/kindnet-gl87l) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-gl87l"
	I1017 19:34:44.294690       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-6wxsk" node="ha-254035-m05"
	E1017 19:34:44.311928       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gl87l\": pod kindnet-gl87l is already assigned to node \"ha-254035-m05\"" logger="UnhandledError" pod="kube-system/kindnet-gl87l"
	I1017 19:34:44.312035       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gl87l" node="ha-254035-m05"
	I1017 19:34:44.312481       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-h77vc" node="ha-254035-m05"
	E1017 19:34:45.117913       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ztwbl\": pod kube-proxy-ztwbl is already assigned to node \"ha-254035-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ztwbl" node="ha-254035-m05"
	E1017 19:34:45.118002       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ztwbl\": pod kube-proxy-ztwbl is already assigned to node \"ha-254035-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-ztwbl"
	E1017 19:34:45.265630       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"kube-proxy-ztwbl\" not found" pod="kube-system/kube-proxy-ztwbl"
	
	
	==> kubelet <==
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.424411     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-gzzsg_kube-system(9d09bb8e-ddb5-4533-9215-83fefb05a7eb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.424463     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-gzzsg" podUID="9d09bb8e-ddb5-4533-9215-83fefb05a7eb"
	Oct 17 19:32:46 ha-254035 kubelet[802]: W1017 19:32:46.425112     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/crio-ebb6a1f53c4835f98f170cb0cc9a8c381e017f19896c6a29b18d262526414238 WatchSource:0}: Error finding container ebb6a1f53c4835f98f170cb0cc9a8c381e017f19896c6a29b18d262526414238: Status 404 returned error can't find the container with id ebb6a1f53c4835f98f170cb0cc9a8c381e017f19896c6a29b18d262526414238
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.428343     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(4784cc20-6df7-4e32-bbfa-e0b3be4a1e83): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.428384     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="4784cc20-6df7-4e32-bbfa-e0b3be4a1e83"
	Oct 17 19:32:46 ha-254035 kubelet[802]: W1017 19:32:46.433597     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/crio-507d7b819debe5b3cd335ff315e790595f8a73c05cf49258f5a95ad85018e8b6 WatchSource:0}: Error finding container 507d7b819debe5b3cd335ff315e790595f8a73c05cf49258f5a95ad85018e8b6: Status 404 returned error can't find the container with id 507d7b819debe5b3cd335ff315e790595f8a73c05cf49258f5a95ad85018e8b6
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.441352     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-wbgc8_kube-system(8e82e918-326c-4295-82ea-e35a31f64287): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.441397     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-wbgc8" podUID="8e82e918-326c-4295-82ea-e35a31f64287"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.442165     802 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-254035\" already exists" pod="kube-system/kube-scheduler-ha-254035"
	Oct 17 19:32:46 ha-254035 kubelet[802]: W1017 19:32:46.458234     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/crio-0240e4c18c32a113147b1316d44dc028805e98a9876780111398a33d445c8673 WatchSource:0}: Error finding container 0240e4c18c32a113147b1316d44dc028805e98a9876780111398a33d445c8673: Status 404 returned error can't find the container with id 0240e4c18c32a113147b1316d44dc028805e98a9876780111398a33d445c8673
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.468716     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-nc6x2_default(4ced2553-3c5f-4d67-ad3c-2ed34ab319ef): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.468759     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-nc6x2" podUID="4ced2553-3c5f-4d67-ad3c-2ed34ab319ef"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.722833     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-nc6x2_default(4ced2553-3c5f-4d67-ad3c-2ed34ab319ef): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.741101     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-nc6x2" podUID="4ced2553-3c5f-4d67-ad3c-2ed34ab319ef"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.749534     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-gfklr_kube-system(8bf2b43b-91c9-4531-a571-36060412860e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755626     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-gfklr" podUID="8bf2b43b-91c9-4531-a571-36060412860e"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755218     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(4784cc20-6df7-4e32-bbfa-e0b3be4a1e83): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755307     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-gzzsg_kube-system(9d09bb8e-ddb5-4533-9215-83fefb05a7eb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755390     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-proxy start failed in pod kube-proxy-548b2_kube-system(4b772887-90df-4871-9343-69349bdda859): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755118     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-wbgc8_kube-system(8e82e918-326c-4295-82ea-e35a31f64287): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.757120     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-wbgc8" podUID="8e82e918-326c-4295-82ea-e35a31f64287"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.757234     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-gzzsg" podUID="9d09bb8e-ddb5-4533-9215-83fefb05a7eb"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.757252     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="4784cc20-6df7-4e32-bbfa-e0b3be4a1e83"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.757271     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-548b2" podUID="4b772887-90df-4871-9343-69349bdda859"
	Oct 17 19:33:28 ha-254035 kubelet[802]: I1017 19:33:28.066788     802 scope.go:117] "RemoveContainer" containerID="0cc2287088bc871e7f4dd5ef5a425a95862343c93ae9b170eadd77d685735b39"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-254035 -n ha-254035
helpers_test.go:269: (dbg) Run:  kubectl --context ha-254035 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (90.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.368543829s)
ha_test.go:305: expected profile "ha-254035" in json of 'profile list' to include 4 nodes but have 5 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-254035\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-254035\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfssh
ares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-254035\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"I
P\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.168.49.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong
\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountM
Size\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
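Context for the ha_test.go:305 failure above: the check parses the output of `profile list --output json` and counts the entries under Config.Nodes for the "ha-254035" profile; it expects 4 nodes, but the JSON above lists 5 because the preceding AddSecondaryNode step added m05 as an extra control plane. A minimal standalone sketch of that node count, assuming only the JSON fields visible in the output above (the struct below is illustrative, not the real minikube config type):

	// count_nodes.go - hedged sketch, not part of the test suite.
	// It shells out to the same binary the test uses and counts Config.Nodes
	// per valid profile, mirroring what ha_test.go:305 asserts on.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList models only the fields needed here ("valid" -> Config.Nodes),
	// matching the key names shown in the JSON output above.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []struct {
					Name         string `json:"Name"`
					ControlPlane bool   `json:"ControlPlane"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// For the failing run this prints "ha-254035: 5 nodes" (m05 included),
			// while the test still expects 4.
			fmt.Printf("%s: %d nodes\n", p.Name, len(p.Config.Nodes))
		}
	}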
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-254035
helpers_test.go:243: (dbg) docker inspect ha-254035:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8",
	        "Created": "2025-10-17T19:17:36.603472481Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 325091,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:32:05.992149801Z",
	            "FinishedAt": "2025-10-17T19:32:05.172940124Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/hosts",
	        "LogPath": "/var/lib/docker/containers/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8-json.log",
	        "Name": "/ha-254035",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-254035:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-254035",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8",
	                "LowerDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/253085d6544d06898aeb6c57eb0eec3096204e05add182dd9ecd66fe9c56ded5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-254035",
	                "Source": "/var/lib/docker/volumes/ha-254035/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-254035",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-254035",
	                "name.minikube.sigs.k8s.io": "ha-254035",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b1b39170e4096374d7e684a87814d212baad95e741e4cc807dce61f43c877747",
	            "SandboxKey": "/var/run/docker/netns/b1b39170e409",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-254035": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:e2:15:6d:bc:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9f667d9c3ea201faa6573d33bffc4907012785051d424eb86a31b1e09eb8b135",
	                    "EndpointID": "e9462a0e2e3d7837432ea03485390bfaae7ae9afbbbbc20020bc0ae2782b8ba7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-254035",
	                        "7f770318d5dc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-254035 -n ha-254035
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 logs -n 25: (1.879582652s)
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-254035 ssh -n ha-254035-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test_ha-254035-m03_ha-254035-m04.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp testdata/cp-test.txt ha-254035-m04:/home/docker/cp-test.txt                                                             │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1188979754/001/cp-test_ha-254035-m04.txt │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035:/home/docker/cp-test_ha-254035-m04_ha-254035.txt                       │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035.txt                                                 │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035-m02:/home/docker/cp-test_ha-254035-m04_ha-254035-m02.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m02 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035-m02.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ cp      │ ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035-m03:/home/docker/cp-test_ha-254035-m04_ha-254035-m03.txt               │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ ssh     │ ha-254035 ssh -n ha-254035-m03 sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035-m03.txt                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ node    │ ha-254035 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:22 UTC │
	│ node    │ ha-254035 node start m02 --alsologtostderr -v 5                                                                                      │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:22 UTC │ 17 Oct 25 19:23 UTC │
	│ node    │ ha-254035 node list --alsologtostderr -v 5                                                                                           │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │                     │
	│ stop    │ ha-254035 stop --alsologtostderr -v 5                                                                                                │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │ 17 Oct 25 19:23 UTC │
	│ start   │ ha-254035 start --wait true --alsologtostderr -v 5                                                                                   │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:23 UTC │                     │
	│ node    │ ha-254035 node list --alsologtostderr -v 5                                                                                           │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:31 UTC │                     │
	│ node    │ ha-254035 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:31 UTC │                     │
	│ stop    │ ha-254035 stop --alsologtostderr -v 5                                                                                                │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:31 UTC │ 17 Oct 25 19:32 UTC │
	│ start   │ ha-254035 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:32 UTC │ 17 Oct 25 19:33 UTC │
	│ node    │ ha-254035 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-254035 │ jenkins │ v1.37.0 │ 17 Oct 25 19:34 UTC │ 17 Oct 25 19:35 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:32:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:32:05.731928  324968 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:32:05.732103  324968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:32:05.732132  324968 out.go:374] Setting ErrFile to fd 2...
	I1017 19:32:05.732151  324968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:32:05.732432  324968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:32:05.732853  324968 out.go:368] Setting JSON to false
	I1017 19:32:05.733704  324968 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8077,"bootTime":1760721449,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 19:32:05.733797  324968 start.go:141] virtualization:  
	I1017 19:32:05.736996  324968 out.go:179] * [ha-254035] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 19:32:05.740976  324968 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:32:05.741039  324968 notify.go:220] Checking for updates...
	I1017 19:32:05.746791  324968 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:32:05.749627  324968 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:32:05.752435  324968 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 19:32:05.755486  324968 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 19:32:05.758645  324968 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:32:05.762073  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:05.762786  324968 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:32:05.783133  324968 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 19:32:05.783261  324968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:32:05.840860  324968 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 19:32:05.83134404 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:32:05.840970  324968 docker.go:318] overlay module found
	I1017 19:32:05.844001  324968 out.go:179] * Using the docker driver based on existing profile
	I1017 19:32:05.846818  324968 start.go:305] selected driver: docker
	I1017 19:32:05.846835  324968 start.go:925] validating driver "docker" against &{Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:32:05.846996  324968 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:32:05.847094  324968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:32:05.907256  324968 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 19:32:05.898245791 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:32:05.907667  324968 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:32:05.907704  324968 cni.go:84] Creating CNI manager for ""
	I1017 19:32:05.907768  324968 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 19:32:05.907825  324968 start.go:349] cluster config:
	{Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:32:05.911004  324968 out.go:179] * Starting "ha-254035" primary control-plane node in "ha-254035" cluster
	I1017 19:32:05.913729  324968 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:32:05.916410  324968 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:32:05.919155  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:32:05.919202  324968 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 19:32:05.919216  324968 cache.go:58] Caching tarball of preloaded images
	I1017 19:32:05.919268  324968 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:32:05.919311  324968 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:32:05.919321  324968 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:32:05.919466  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:05.938132  324968 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:32:05.938154  324968 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:32:05.938173  324968 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:32:05.938195  324968 start.go:360] acquireMachinesLock for ha-254035: {Name:mka2e39989b9cf6078778e7f6519885462ea711f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:32:05.938260  324968 start.go:364] duration metric: took 36.741µs to acquireMachinesLock for "ha-254035"
	I1017 19:32:05.938292  324968 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:32:05.938311  324968 fix.go:54] fixHost starting: 
	I1017 19:32:05.938563  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:32:05.955500  324968 fix.go:112] recreateIfNeeded on ha-254035: state=Stopped err=<nil>
	W1017 19:32:05.955532  324968 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:32:05.958901  324968 out.go:252] * Restarting existing docker container for "ha-254035" ...
	I1017 19:32:05.958986  324968 cli_runner.go:164] Run: docker start ha-254035
	I1017 19:32:06.223945  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:32:06.246991  324968 kic.go:430] container "ha-254035" state is running.
	I1017 19:32:06.247441  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:32:06.267236  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:06.267478  324968 machine.go:93] provisionDockerMachine start ...
	I1017 19:32:06.267538  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:06.286531  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:06.287650  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 19:32:06.287670  324968 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:32:06.288401  324968 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:32:09.440064  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035
	
	I1017 19:32:09.440099  324968 ubuntu.go:182] provisioning hostname "ha-254035"
	I1017 19:32:09.440162  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:09.457351  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:09.457659  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 19:32:09.457674  324968 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035 && echo "ha-254035" | sudo tee /etc/hostname
	I1017 19:32:09.613626  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035
	
	I1017 19:32:09.613711  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:09.630718  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:09.631029  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 19:32:09.631045  324968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:32:09.780773  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
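For illustration, a minimal Go sketch of the host-port lookup that the repeated "docker container inspect -f" calls above perform, shelling out to the Docker CLI. The container name and Go template come from the log; the wrapper itself is not minikube's code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the log passes to `docker container inspect -f`:
	// it resolves which host port is mapped to the container's SSH port 22/tcp.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "ha-254035").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("SSH host port:", strings.TrimSpace(string(out))) // e.g. 33184 in the run above
}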
	I1017 19:32:09.780802  324968 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:32:09.780820  324968 ubuntu.go:190] setting up certificates
	I1017 19:32:09.780831  324968 provision.go:84] configureAuth start
	I1017 19:32:09.780894  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:32:09.801074  324968 provision.go:143] copyHostCerts
	I1017 19:32:09.801116  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:09.801147  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:32:09.801165  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:09.801244  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:32:09.801333  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:09.801350  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:32:09.801354  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:09.801381  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:32:09.801427  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:09.801450  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:32:09.801455  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:09.801479  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:32:09.801528  324968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035 san=[127.0.0.1 192.168.49.2 ha-254035 localhost minikube]
	I1017 19:32:10.886077  324968 provision.go:177] copyRemoteCerts
	I1017 19:32:10.886156  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:32:10.886202  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:10.904681  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.010120  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:32:11.010211  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:32:11.028108  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:32:11.028165  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1017 19:32:11.044982  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:32:11.045040  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:32:11.061816  324968 provision.go:87] duration metric: took 1.280961553s to configureAuth
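The configureAuth step above issues a server certificate whose subject alternative names are the ones logged by provision.go (127.0.0.1, 192.168.49.2, ha-254035, localhost, minikube). A minimal crypto/x509 sketch of issuing such a certificate, assuming a throwaway in-memory CA; the real run signs with the profile's ca.pem / ca-key.pem instead.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// Stand-in CA key and certificate (illustrative only).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the DNS and IP SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-254035"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		DNSNames:     []string{"ha-254035", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	if _, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey); err != nil {
		log.Fatal(err)
	}
	log.Println("issued server certificate with SANs: 127.0.0.1 192.168.49.2 ha-254035 localhost minikube")
}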
	I1017 19:32:11.061844  324968 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:32:11.062085  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:11.062193  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.080891  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:11.081208  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33184 <nil> <nil>}
	I1017 19:32:11.081230  324968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:32:11.407184  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:32:11.407205  324968 machine.go:96] duration metric: took 5.139717317s to provisionDockerMachine
	I1017 19:32:11.407216  324968 start.go:293] postStartSetup for "ha-254035" (driver="docker")
	I1017 19:32:11.407226  324968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:32:11.407298  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:32:11.407335  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.427760  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.532299  324968 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:32:11.535879  324968 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:32:11.535910  324968 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:32:11.535921  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:32:11.535995  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:32:11.536114  324968 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:32:11.536128  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:32:11.536253  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:32:11.544245  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:32:11.561441  324968 start.go:296] duration metric: took 154.210245ms for postStartSetup
	I1017 19:32:11.561521  324968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:32:11.561565  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.578819  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.677440  324968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:32:11.681988  324968 fix.go:56] duration metric: took 5.74367054s for fixHost
	I1017 19:32:11.682016  324968 start.go:83] releasing machines lock for "ha-254035", held for 5.743742202s
	I1017 19:32:11.682098  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:32:11.699528  324968 ssh_runner.go:195] Run: cat /version.json
	I1017 19:32:11.699564  324968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:32:11.699581  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.699635  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:11.717585  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.718770  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:11.820235  324968 ssh_runner.go:195] Run: systemctl --version
	I1017 19:32:11.912550  324968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:32:11.950130  324968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:32:11.954364  324968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:32:11.954441  324968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:32:11.961885  324968 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:32:11.961962  324968 start.go:495] detecting cgroup driver to use...
	I1017 19:32:11.962000  324968 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:32:11.962067  324968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:32:11.977362  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:32:11.990093  324968 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:32:11.990161  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:32:12.005596  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:32:12.028034  324968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:32:12.152900  324968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:32:12.266767  324968 docker.go:234] disabling docker service ...
	I1017 19:32:12.266872  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:32:12.281703  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:32:12.294628  324968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:32:12.407632  324968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:32:12.520465  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:32:12.533571  324968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:32:12.547072  324968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:32:12.547164  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.555749  324968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:32:12.555816  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.564895  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.574036  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.582944  324968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:32:12.591372  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.600416  324968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.609166  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:12.618096  324968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:32:12.625617  324968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:32:12.633309  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:12.745158  324968 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:32:12.879102  324968 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:32:12.879171  324968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:32:12.883018  324968 start.go:563] Will wait 60s for crictl version
	I1017 19:32:12.883079  324968 ssh_runner.go:195] Run: which crictl
	I1017 19:32:12.886642  324968 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:32:12.910860  324968 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:32:12.910959  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:32:12.937450  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:32:12.969308  324968 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
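For reference, the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. A minimal Go sketch of the two main substitutions (pause image and cgroup manager); the sample input is invented, the replacement values come from the log.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Invented starting content standing in for 02-crio.conf.
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"

	// Mirrors: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

	// Mirrors: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}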
	I1017 19:32:12.971996  324968 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:32:12.987690  324968 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:32:12.991595  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
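A small sketch of the same replace-or-append logic the bash one-liner above applies to /etc/hosts for host.minikube.internal, done here on an in-memory copy with invented contents.

package main

import (
	"fmt"
	"strings"
)

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n"
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		// Drop any existing host.minikube.internal entry, mirroring the grep -v above.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	// Re-append the gateway mapping, mirroring the echo above.
	kept = append(kept, "192.168.49.1\thost.minikube.internal")
	fmt.Println(strings.Join(kept, "\n"))
}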
	I1017 19:32:13.001105  324968 kubeadm.go:883] updating cluster {Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:32:13.001261  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:32:13.001318  324968 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:32:13.038776  324968 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:32:13.038803  324968 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:32:13.038896  324968 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:32:13.068706  324968 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:32:13.068731  324968 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:32:13.068740  324968 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 19:32:13.068844  324968 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:32:13.068920  324968 ssh_runner.go:195] Run: crio config
	I1017 19:32:13.128454  324968 cni.go:84] Creating CNI manager for ""
	I1017 19:32:13.128483  324968 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 19:32:13.128514  324968 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:32:13.128575  324968 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-254035 NodeName:ha-254035 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:32:13.128708  324968 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-254035"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
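	The kubeadm.yaml rendered above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch of splitting such a stream and reading each document's kind, assuming gopkg.in/yaml.v3; the stream here is abbreviated to just the apiVersion/kind headers.

package main

import (
	"fmt"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	rendered := `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	for _, doc := range strings.Split(rendered, "---\n") {
		var meta struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil || meta.Kind == "" {
			continue
		}
		fmt.Printf("%s (%s)\n", meta.Kind, meta.APIVersion)
	}
}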
	
	I1017 19:32:13.128729  324968 kube-vip.go:115] generating kube-vip config ...
	I1017 19:32:13.128779  324968 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:32:13.140710  324968 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:32:13.140824  324968 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
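	The static pod manifest above pins kube-vip to the HA VIP 192.168.49.254 on port 8443 (ARP mode only in this run, since the ip_vs modules were not found earlier). A trivial, illustrative reachability probe for that endpoint, not part of the test itself:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The API server VIP and port from the manifest above.
	conn, err := net.DialTimeout("tcp", "192.168.49.254:8443", 2*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP reachable")
}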
	I1017 19:32:13.140891  324968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:32:13.148269  324968 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:32:13.148357  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1017 19:32:13.156108  324968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1017 19:32:13.168572  324968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:32:13.181432  324968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1017 19:32:13.193977  324968 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 19:32:13.207012  324968 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:32:13.210795  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:32:13.220459  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:13.334243  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:32:13.350459  324968 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.2
	I1017 19:32:13.350480  324968 certs.go:195] generating shared ca certs ...
	I1017 19:32:13.350496  324968 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:13.350630  324968 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:32:13.350673  324968 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:32:13.350681  324968 certs.go:257] generating profile certs ...
	I1017 19:32:13.350760  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:32:13.350837  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.96820cea
	I1017 19:32:13.350876  324968 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:32:13.350885  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:32:13.350898  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:32:13.350908  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:32:13.350918  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:32:13.350928  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:32:13.350941  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:32:13.350951  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:32:13.350962  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:32:13.351012  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:32:13.351041  324968 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:32:13.351048  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:32:13.351070  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:32:13.351095  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:32:13.351117  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:32:13.351161  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:32:13.351191  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:32:13.351207  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:13.351219  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:32:13.351856  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:32:13.375776  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:32:13.394623  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:32:13.413878  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:32:13.434296  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:32:13.456687  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:32:13.484245  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:32:13.505393  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:32:13.528512  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:32:13.550651  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:32:13.581215  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:32:13.601377  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:32:13.617352  324968 ssh_runner.go:195] Run: openssl version
	I1017 19:32:13.624146  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:32:13.633165  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:32:13.637212  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:32:13.637279  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:32:13.680086  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:32:13.689010  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:32:13.698044  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:13.701888  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:13.701957  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:13.744236  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:32:13.752213  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:32:13.760295  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:32:13.764256  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:32:13.764320  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:32:13.806422  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:32:13.814023  324968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:32:13.817664  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:32:13.858251  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:32:13.899329  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:32:13.940348  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:32:13.981700  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:32:14.022967  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
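The "-checkend 86400" probes above exit non-zero when a certificate expires within the next 24 hours, which is how the restart path decides whether the existing control-plane certificates can be reused. A minimal wrapper around the same openssl invocation; the certificate path is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 means the certificate is still valid 86400 seconds (one day) from now.
	cmd := exec.Command("openssl", "x509", "-noout",
		"-in", "/var/lib/minikube/certs/apiserver.crt", "-checkend", "86400")
	if err := cmd.Run(); err != nil {
		fmt.Println("certificate expires within 24h (or check failed):", err)
		return
	}
	fmt.Println("certificate valid for at least 24h")
}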
	I1017 19:32:14.071872  324968 kubeadm.go:400] StartCluster: {Name:ha-254035 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:32:14.072073  324968 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:32:14.072171  324968 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:32:14.159623  324968 cri.go:89] found id: "0652fd27f5bff0f3d194b39abbb92602f049204bb45577d9a403537b5949c8cc"
	I1017 19:32:14.159695  324968 cri.go:89] found id: ""
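Illustrative only: the crictl query above, wrapped via os/exec, printing one kube-system container ID per line.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}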
	I1017 19:32:14.159788  324968 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 19:32:14.178262  324968 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:32:14Z" level=error msg="open /run/runc: no such file or directory"
	I1017 19:32:14.178424  324968 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:32:14.193618  324968 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 19:32:14.193677  324968 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 19:32:14.193771  324968 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 19:32:14.214880  324968 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:32:14.215386  324968 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-254035" does not appear in /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:32:14.215555  324968 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-257739/kubeconfig needs updating (will repair): [kubeconfig missing "ha-254035" cluster setting kubeconfig missing "ha-254035" context setting]
	I1017 19:32:14.215920  324968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:14.216577  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 19:32:14.217294  324968 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1017 19:32:14.217346  324968 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1017 19:32:14.217362  324968 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1017 19:32:14.217367  324968 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1017 19:32:14.217427  324968 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1017 19:32:14.217452  324968 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
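The rest.Config dumped above is built by minikube directly from the profile's client certificate and key. For comparison, a minimal client-go sketch that builds an equivalent client the more common way, from a kubeconfig file, and lists nodes; the path is illustrative and this is not minikube's own code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative path; the run above uses the kubeconfig under the minikube-integration home.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}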
	I1017 19:32:14.217940  324968 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 19:32:14.232358  324968 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1017 19:32:14.232432  324968 kubeadm.go:601] duration metric: took 38.716713ms to restartPrimaryControlPlane
	I1017 19:32:14.232455  324968 kubeadm.go:402] duration metric: took 160.594092ms to StartCluster
	I1017 19:32:14.232498  324968 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:14.232662  324968 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:32:14.233403  324968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:14.233677  324968 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:32:14.233733  324968 start.go:241] waiting for startup goroutines ...
	I1017 19:32:14.233763  324968 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 19:32:14.234454  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:14.239733  324968 out.go:179] * Enabled addons: 
	I1017 19:32:14.243909  324968 addons.go:514] duration metric: took 10.136788ms for enable addons: enabled=[]
	I1017 19:32:14.243996  324968 start.go:246] waiting for cluster config update ...
	I1017 19:32:14.244021  324968 start.go:255] writing updated cluster config ...
	I1017 19:32:14.247787  324968 out.go:203] 
	I1017 19:32:14.251318  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:14.251508  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:14.254862  324968 out.go:179] * Starting "ha-254035-m02" control-plane node in "ha-254035" cluster
	I1017 19:32:14.258139  324968 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:32:14.261425  324968 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:32:14.264451  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:32:14.264576  324968 cache.go:58] Caching tarball of preloaded images
	I1017 19:32:14.264510  324968 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:32:14.264972  324968 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:32:14.265018  324968 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:32:14.265234  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:14.286925  324968 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:32:14.286943  324968 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:32:14.286955  324968 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:32:14.286977  324968 start.go:360] acquireMachinesLock for ha-254035-m02: {Name:mkcf59557cfb2c18712510006a9b88f53e9d8916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:32:14.287029  324968 start.go:364] duration metric: took 36.003µs to acquireMachinesLock for "ha-254035-m02"
	I1017 19:32:14.287048  324968 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:32:14.287054  324968 fix.go:54] fixHost starting: m02
	I1017 19:32:14.287335  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:32:14.308380  324968 fix.go:112] recreateIfNeeded on ha-254035-m02: state=Stopped err=<nil>
	W1017 19:32:14.308406  324968 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:32:14.312007  324968 out.go:252] * Restarting existing docker container for "ha-254035-m02" ...
	I1017 19:32:14.312096  324968 cli_runner.go:164] Run: docker start ha-254035-m02
	I1017 19:32:14.710881  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:32:14.738971  324968 kic.go:430] container "ha-254035-m02" state is running.
	I1017 19:32:14.739337  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:32:14.764764  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:14.765007  324968 machine.go:93] provisionDockerMachine start ...
	I1017 19:32:14.765074  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:14.794957  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:14.795271  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1017 19:32:14.795287  324968 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:32:14.795888  324968 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:32:17.992435  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m02
	
	I1017 19:32:17.992457  324968 ubuntu.go:182] provisioning hostname "ha-254035-m02"
	I1017 19:32:17.992541  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:18.030394  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:18.030717  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1017 19:32:18.030730  324968 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035-m02 && echo "ha-254035-m02" | sudo tee /etc/hostname
	I1017 19:32:18.238178  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m02
	
	I1017 19:32:18.238358  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:18.269009  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:18.269312  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1017 19:32:18.269330  324968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:32:18.453189  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
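Note: the shell fragment executed just above is the provisioner's idempotent hostname fix-up — if /etc/hosts does not already map the node name, it rewrites an existing 127.0.1.1 entry or appends a new one. Below is a minimal Go sketch that rebuilds the same snippet for an arbitrary hostname; the helper name ensureHostnameCmd is ours for illustration, not minikube's source.

package main

import "fmt"

// ensureHostnameCmd returns a shell snippet equivalent to the one in the log:
// map 127.0.1.1 to the node's hostname in /etc/hosts, updating an existing
// 127.0.1.1 line if present and appending one otherwise.
func ensureHostnameCmd(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, hostname)
}

func main() {
	fmt.Println(ensureHostnameCmd("ha-254035-m02"))
}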
	I1017 19:32:18.453217  324968 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:32:18.453238  324968 ubuntu.go:190] setting up certificates
	I1017 19:32:18.453248  324968 provision.go:84] configureAuth start
	I1017 19:32:18.453312  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:32:18.494134  324968 provision.go:143] copyHostCerts
	I1017 19:32:18.494179  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:18.494213  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:32:18.494225  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:18.494315  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:32:18.494442  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:18.494469  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:32:18.494479  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:18.494510  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:32:18.494560  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:18.494584  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:32:18.494592  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:18.494620  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:32:18.494675  324968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035-m02 san=[127.0.0.1 192.168.49.3 ha-254035-m02 localhost minikube]
	I1017 19:32:19.339690  324968 provision.go:177] copyRemoteCerts
	I1017 19:32:19.339761  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:32:19.339805  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:19.360710  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:19.488967  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:32:19.489032  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:32:19.531594  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:32:19.531655  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:32:19.572626  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:32:19.572693  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:32:19.617410  324968 provision.go:87] duration metric: took 1.16414737s to configureAuth
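Note: configureAuth above regenerates the machine's server certificate with the SANs shown at 19:32:18 (127.0.0.1, 192.168.49.3, ha-254035-m02, localhost, minikube) and then copies server.pem, server-key.pem and ca.pem into /etc/docker on the node. The Go sketch below issues a CA-signed server certificate with those SANs; the throwaway in-process CA, the key size and the elided error handling are purely illustrative — minikube signs with the CA stored under .minikube/certs.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustration only: generate a throwaway CA in-process (errors elided).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-254035-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-254035-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}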
	I1017 19:32:19.617479  324968 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:32:19.617739  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:19.617872  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:19.658286  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:19.658598  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33189 <nil> <nil>}
	I1017 19:32:19.658613  324968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:32:20.717397  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:32:20.717469  324968 machine.go:96] duration metric: took 5.952443469s to provisionDockerMachine
	I1017 19:32:20.717493  324968 start.go:293] postStartSetup for "ha-254035-m02" (driver="docker")
	I1017 19:32:20.717527  324968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:32:20.717636  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:32:20.717717  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:20.738048  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:20.853074  324968 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:32:20.857246  324968 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:32:20.857278  324968 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:32:20.857289  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:32:20.857346  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:32:20.857423  324968 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:32:20.857437  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:32:20.857537  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:32:20.866006  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:32:20.886225  324968 start.go:296] duration metric: took 168.70092ms for postStartSetup
	I1017 19:32:20.886334  324968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:32:20.886398  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:20.912756  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:21.034286  324968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:32:21.042383  324968 fix.go:56] duration metric: took 6.755322442s for fixHost
	I1017 19:32:21.042417  324968 start.go:83] releasing machines lock for "ha-254035-m02", held for 6.755380378s
	I1017 19:32:21.042509  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m02
	I1017 19:32:21.067009  324968 out.go:179] * Found network options:
	I1017 19:32:21.069796  324968 out.go:179]   - NO_PROXY=192.168.49.2
	W1017 19:32:21.072617  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:32:21.072667  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 19:32:21.072737  324968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:32:21.072783  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:21.072798  324968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:32:21.072853  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m02
	I1017 19:32:21.106980  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:21.116734  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33189 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m02/id_rsa Username:docker}
	I1017 19:32:21.321123  324968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:32:21.398151  324968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:32:21.398260  324968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:32:21.429985  324968 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:32:21.430019  324968 start.go:495] detecting cgroup driver to use...
	I1017 19:32:21.430052  324968 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:32:21.430120  324968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:32:21.469545  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:32:21.499838  324968 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:32:21.499915  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:32:21.546298  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:32:21.574508  324968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:32:22.043397  324968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:32:22.346332  324968 docker.go:234] disabling docker service ...
	I1017 19:32:22.346414  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:32:22.366415  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:32:22.385363  324968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:32:22.610088  324968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:32:22.882540  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:32:22.898584  324968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:32:22.925839  324968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:32:22.925982  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.941214  324968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:32:22.941380  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.952790  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.964392  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.976274  324968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:32:22.986631  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:22.999122  324968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:23.017402  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:32:23.031048  324968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:32:23.041313  324968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:32:23.054658  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:23.287821  324968 ssh_runner.go:195] Run: sudo systemctl restart crio
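Note: the run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place — pin the pause image to registry.k8s.io/pause:3.10.1, force cgroup_manager = "cgroupfs", keep conmon in the "pod" cgroup, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls — before a daemon-reload and CRI-O restart. The Go sketch below assembles the core of that command sequence; the function name and signature are ours, and the sysctl and ip_forward steps from the log are left out for brevity.

package main

import "fmt"

// crioConfCmds mirrors the edits logged against /etc/crio/crio.conf.d/02-crio.conf.
func crioConfCmds(pauseImage, cgroupDriver string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, c := range crioConfCmds("registry.k8s.io/pause:3.10.1", "cgroupfs") {
		fmt.Println(c)
	}
}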
	I1017 19:32:23.539139  324968 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:32:23.539262  324968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:32:23.543731  324968 start.go:563] Will wait 60s for crictl version
	I1017 19:32:23.543842  324968 ssh_runner.go:195] Run: which crictl
	I1017 19:32:23.550732  324968 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:32:23.592317  324968 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:32:23.592405  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:32:23.642337  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:32:23.710060  324968 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:32:23.713120  324968 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 19:32:23.716299  324968 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:32:23.744818  324968 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:32:23.750008  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:32:23.771597  324968 mustload.go:65] Loading cluster: ha-254035
	I1017 19:32:23.771839  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:23.772139  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:32:23.805838  324968 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:32:23.806449  324968 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.3
	I1017 19:32:23.806468  324968 certs.go:195] generating shared ca certs ...
	I1017 19:32:23.806508  324968 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:32:23.809795  324968 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:32:23.809866  324968 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:32:23.809883  324968 certs.go:257] generating profile certs ...
	I1017 19:32:23.809976  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:32:23.810032  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.5a836dc6
	I1017 19:32:23.810076  324968 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:32:23.810089  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:32:23.810105  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:32:23.810121  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:32:23.810138  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:32:23.810155  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:32:23.810173  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:32:23.810185  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:32:23.810197  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:32:23.810249  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:32:23.810281  324968 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:32:23.810294  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:32:23.810326  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:32:23.810354  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:32:23.810380  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:32:23.810425  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:32:23.810467  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:23.810484  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:32:23.810495  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:32:23.810560  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:32:23.830858  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:32:23.928800  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 19:32:23.933176  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 19:32:23.948803  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 19:32:23.953564  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 19:32:23.963833  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 19:32:23.970797  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 19:32:23.980707  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 19:32:23.985094  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1017 19:32:23.994719  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 19:32:23.998983  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 19:32:24.010610  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 19:32:24.015549  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 19:32:24.026675  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:32:24.046169  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:32:24.065010  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:32:24.083555  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:32:24.101835  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:32:24.121645  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:32:24.140364  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:32:24.158250  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:32:24.175078  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:32:24.192107  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:32:24.210093  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:32:24.227779  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 19:32:24.240287  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 19:32:24.253704  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 19:32:24.268887  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1017 19:32:24.281554  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 19:32:24.294030  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 19:32:24.307056  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1017 19:32:24.319713  324968 ssh_runner.go:195] Run: openssl version
	I1017 19:32:24.326454  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:32:24.334896  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:32:24.338984  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:32:24.339069  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:32:24.382244  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:32:24.389973  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:32:24.397963  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:24.402178  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:24.402260  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:32:24.445450  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:32:24.454057  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:32:24.462416  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:32:24.469188  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:32:24.469265  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:32:24.513771  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:32:24.526391  324968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:32:24.532093  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:32:24.577438  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:32:24.619730  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:32:24.661938  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:32:24.706695  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:32:24.750711  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 19:32:24.792693  324968 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1017 19:32:24.792815  324968 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:32:24.792847  324968 kube-vip.go:115] generating kube-vip config ...
	I1017 19:32:24.792907  324968 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:32:24.805902  324968 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:32:24.805963  324968 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
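Note: the manifest above lands in /etc/kubernetes/manifests/kube-vip.yaml, so the kubelet runs kube-vip as a static pod that advertises the control-plane VIP 192.168.49.254 over ARP on eth0 and takes the plndr-cp-lock lease in kube-system. Because the lsmod check at 19:32:24 found no ip_vs module, control-plane load-balancing stays disabled and only the VIP is managed. Below is a small Go sketch of that same module probe, reading /proc/modules instead of shelling out; the helper name is ours.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipvsAvailable reports whether an ip_vs kernel module is loaded, the check
// the log performs with `sudo sh -c "lsmod | grep ip_vs"`.
func ipvsAvailable() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), "ip_vs") {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	ok, err := ipvsAvailable()
	fmt.Println(ok, err)
}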
	I1017 19:32:24.806034  324968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:32:24.815558  324968 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:32:24.815637  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 19:32:24.823591  324968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:32:24.837169  324968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:32:24.849790  324968 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 19:32:24.870243  324968 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:32:24.879498  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:32:24.891396  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:25.079299  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:32:25.098478  324968 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:32:25.098820  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:25.104996  324968 out.go:179] * Verifying Kubernetes components...
	I1017 19:32:25.107746  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:32:25.272984  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:32:25.289585  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 19:32:25.289670  324968 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 19:32:25.289939  324968 node_ready.go:35] waiting up to 6m0s for node "ha-254035-m02" to be "Ready" ...
	W1017 19:32:45.698726  324968 node_ready.go:57] node "ha-254035-m02" has "Ready":"Unknown" status (will retry)
	W1017 19:32:47.846677  324968 node_ready.go:57] node "ha-254035-m02" has "Ready":"Unknown" status (will retry)
	W1017 19:32:50.300191  324968 node_ready.go:57] node "ha-254035-m02" has "Ready":"Unknown" status (will retry)
	W1017 19:32:52.794234  324968 node_ready.go:57] node "ha-254035-m02" has "Ready":"Unknown" status (will retry)
	I1017 19:32:55.298996  324968 node_ready.go:49] node "ha-254035-m02" is "Ready"
	I1017 19:32:55.299027  324968 node_ready.go:38] duration metric: took 30.009056285s for node "ha-254035-m02" to be "Ready" ...
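Note: the restarted node took roughly 30 seconds to report Ready. Below is a hedged client-go sketch of the same wait loop; the kubeconfig path, poll interval and helper name are placeholders, since the test drives this through minikube's own kapi client shown above.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the named node reports Ready=True or the context expires.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, cs, "ha-254035-m02"))
}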
	I1017 19:32:55.299042  324968 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:32:55.299101  324968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:32:55.311396  324968 api_server.go:72] duration metric: took 30.212852853s to wait for apiserver process to appear ...
	I1017 19:32:55.311421  324968 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:32:55.311440  324968 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 19:32:55.321736  324968 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 19:32:55.323225  324968 api_server.go:141] control plane version: v1.34.1
	I1017 19:32:55.323289  324968 api_server.go:131] duration metric: took 11.860591ms to wait for apiserver health ...
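Note: once the kube-apiserver process is up, the check above is a plain HTTPS GET against /healthz that must return 200 "ok". A short Go sketch of that probe follows; TLS verification is skipped only to keep the example self-contained, whereas the real check authenticates with the cluster CA and client certificate.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// healthz performs the probe shown in the log against the apiserver endpoint.
func healthz(url string) (string, error) {
	c := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	resp, err := c.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return fmt.Sprintf("%d %s", resp.StatusCode, body), err
}

func main() {
	fmt.Println(healthz("https://192.168.49.2:8443/healthz"))
}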
	I1017 19:32:55.323326  324968 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:32:55.332734  324968 system_pods.go:59] 26 kube-system pods found
	I1017 19:32:55.332788  324968 system_pods.go:61] "coredns-66bc5c9577-gfklr" [8bf2b43b-91c9-4531-a571-36060412860e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:32:55.332797  324968 system_pods.go:61] "coredns-66bc5c9577-wbgc8" [8e82e918-326c-4295-82ea-e35a31f64287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:32:55.332809  324968 system_pods.go:61] "etcd-ha-254035" [b4680f45-2e5c-49cd-8f12-76cd58e8a039] Running
	I1017 19:32:55.332819  324968 system_pods.go:61] "etcd-ha-254035-m02" [fd83b82f-417f-4a8d-b6f2-82d1a3ea4233] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:32:55.332827  324968 system_pods.go:61] "etcd-ha-254035-m03" [98b26c2c-cb88-4ade-80f5-45b9d2b82e8f] Running
	I1017 19:32:55.332832  324968 system_pods.go:61] "kindnet-2k9kj" [79d0c5f8-da5a-4d9e-b627-6746685bb4ec] Running
	I1017 19:32:55.332845  324968 system_pods.go:61] "kindnet-gzzsg" [9d09bb8e-ddb5-4533-9215-83fefb05a7eb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:32:55.332850  324968 system_pods.go:61] "kindnet-pwhwv" [45fe6d6c-f02a-45fd-807f-68edc98a1964] Running
	I1017 19:32:55.332863  324968 system_pods.go:61] "kindnet-vss98" [a6f8b1bf-7a57-4b08-ba72-5c79fe8d1cbe] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:32:55.332872  324968 system_pods.go:61] "kube-apiserver-ha-254035" [d7b4adda-06ab-4426-9829-87c607195341] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:32:55.332881  324968 system_pods.go:61] "kube-apiserver-ha-254035-m02" [9099db15-8600-470e-94c3-ca2a5eeea1ff] Running
	I1017 19:32:55.332886  324968 system_pods.go:61] "kube-apiserver-ha-254035-m03" [eb9a2a88-a691-4422-bb82-e0c198d601eb] Running
	I1017 19:32:55.332893  324968 system_pods.go:61] "kube-controller-manager-ha-254035" [9c5287e1-d9d8-4020-b6ec-b1059fff6764] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:32:55.332905  324968 system_pods.go:61] "kube-controller-manager-ha-254035-m02" [54702c01-b38e-4b5e-b7ea-e5af903630c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:32:55.332913  324968 system_pods.go:61] "kube-controller-manager-ha-254035-m03" [2bfb9df5-b257-45ec-be05-e930f56e3c7c] Running
	I1017 19:32:55.332921  324968 system_pods.go:61] "kube-proxy-548b2" [4b772887-90df-4871-9343-69349bdda859] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:32:55.332931  324968 system_pods.go:61] "kube-proxy-b4fr6" [a7ace6b8-0068-4c44-b8d9-8d66b10fa286] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:32:55.332936  324968 system_pods.go:61] "kube-proxy-fr5ts" [5c43f8a5-c3e0-4893-9ab0-c99f69a43434] Running
	I1017 19:32:55.332941  324968 system_pods.go:61] "kube-proxy-k56cv" [32bc352e-19aa-4bcf-8c5f-bb6ffa1b2f4d] Running
	I1017 19:32:55.332953  324968 system_pods.go:61] "kube-scheduler-ha-254035" [2f888dff-efbc-410b-9e14-93754573f2f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:32:55.332964  324968 system_pods.go:61] "kube-scheduler-ha-254035-m02" [dcaa8956-7720-467c-86d5-c0296adc07dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:32:55.332973  324968 system_pods.go:61] "kube-scheduler-ha-254035-m03" [00e19215-9094-448d-b734-227230b1c474] Running
	I1017 19:32:55.332981  324968 system_pods.go:61] "kube-vip-ha-254035" [777cc428-db79-4dee-abea-a428f4fabb67] Running
	I1017 19:32:55.332985  324968 system_pods.go:61] "kube-vip-ha-254035-m02" [3a49ae9c-fc6c-4ed7-9162-7ebc56124917] Running
	I1017 19:32:55.332989  324968 system_pods.go:61] "kube-vip-ha-254035-m03" [fa0f29b9-585d-4e28-9e32-7d493f0010dd] Running
	I1017 19:32:55.333000  324968 system_pods.go:61] "storage-provisioner" [4784cc20-6df7-4e32-bbfa-e0b3be4a1e83] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:32:55.333009  324968 system_pods.go:74] duration metric: took 9.659246ms to wait for pod list to return data ...
	I1017 19:32:55.333022  324968 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:32:55.344111  324968 default_sa.go:45] found service account: "default"
	I1017 19:32:55.344138  324968 default_sa.go:55] duration metric: took 11.10916ms for default service account to be created ...
	I1017 19:32:55.344149  324968 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:32:55.351885  324968 system_pods.go:86] 26 kube-system pods found
	I1017 19:32:55.351922  324968 system_pods.go:89] "coredns-66bc5c9577-gfklr" [8bf2b43b-91c9-4531-a571-36060412860e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:32:55.351933  324968 system_pods.go:89] "coredns-66bc5c9577-wbgc8" [8e82e918-326c-4295-82ea-e35a31f64287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:32:55.351940  324968 system_pods.go:89] "etcd-ha-254035" [b4680f45-2e5c-49cd-8f12-76cd58e8a039] Running
	I1017 19:32:55.351947  324968 system_pods.go:89] "etcd-ha-254035-m02" [fd83b82f-417f-4a8d-b6f2-82d1a3ea4233] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 19:32:55.351952  324968 system_pods.go:89] "etcd-ha-254035-m03" [98b26c2c-cb88-4ade-80f5-45b9d2b82e8f] Running
	I1017 19:32:55.351957  324968 system_pods.go:89] "kindnet-2k9kj" [79d0c5f8-da5a-4d9e-b627-6746685bb4ec] Running
	I1017 19:32:55.351966  324968 system_pods.go:89] "kindnet-gzzsg" [9d09bb8e-ddb5-4533-9215-83fefb05a7eb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:32:55.351971  324968 system_pods.go:89] "kindnet-pwhwv" [45fe6d6c-f02a-45fd-807f-68edc98a1964] Running
	I1017 19:32:55.351986  324968 system_pods.go:89] "kindnet-vss98" [a6f8b1bf-7a57-4b08-ba72-5c79fe8d1cbe] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 19:32:55.351997  324968 system_pods.go:89] "kube-apiserver-ha-254035" [d7b4adda-06ab-4426-9829-87c607195341] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 19:32:55.352003  324968 system_pods.go:89] "kube-apiserver-ha-254035-m02" [9099db15-8600-470e-94c3-ca2a5eeea1ff] Running
	I1017 19:32:55.352010  324968 system_pods.go:89] "kube-apiserver-ha-254035-m03" [eb9a2a88-a691-4422-bb82-e0c198d601eb] Running
	I1017 19:32:55.352019  324968 system_pods.go:89] "kube-controller-manager-ha-254035" [9c5287e1-d9d8-4020-b6ec-b1059fff6764] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:32:55.352031  324968 system_pods.go:89] "kube-controller-manager-ha-254035-m02" [54702c01-b38e-4b5e-b7ea-e5af903630c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 19:32:55.352036  324968 system_pods.go:89] "kube-controller-manager-ha-254035-m03" [2bfb9df5-b257-45ec-be05-e930f56e3c7c] Running
	I1017 19:32:55.352043  324968 system_pods.go:89] "kube-proxy-548b2" [4b772887-90df-4871-9343-69349bdda859] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:32:55.352051  324968 system_pods.go:89] "kube-proxy-b4fr6" [a7ace6b8-0068-4c44-b8d9-8d66b10fa286] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 19:32:55.352056  324968 system_pods.go:89] "kube-proxy-fr5ts" [5c43f8a5-c3e0-4893-9ab0-c99f69a43434] Running
	I1017 19:32:55.352062  324968 system_pods.go:89] "kube-proxy-k56cv" [32bc352e-19aa-4bcf-8c5f-bb6ffa1b2f4d] Running
	I1017 19:32:55.352068  324968 system_pods.go:89] "kube-scheduler-ha-254035" [2f888dff-efbc-410b-9e14-93754573f2f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:32:55.352086  324968 system_pods.go:89] "kube-scheduler-ha-254035-m02" [dcaa8956-7720-467c-86d5-c0296adc07dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 19:32:55.352091  324968 system_pods.go:89] "kube-scheduler-ha-254035-m03" [00e19215-9094-448d-b734-227230b1c474] Running
	I1017 19:32:55.352096  324968 system_pods.go:89] "kube-vip-ha-254035" [777cc428-db79-4dee-abea-a428f4fabb67] Running
	I1017 19:32:55.352100  324968 system_pods.go:89] "kube-vip-ha-254035-m02" [3a49ae9c-fc6c-4ed7-9162-7ebc56124917] Running
	I1017 19:32:55.352108  324968 system_pods.go:89] "kube-vip-ha-254035-m03" [fa0f29b9-585d-4e28-9e32-7d493f0010dd] Running
	I1017 19:32:55.352116  324968 system_pods.go:89] "storage-provisioner" [4784cc20-6df7-4e32-bbfa-e0b3be4a1e83] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:32:55.352123  324968 system_pods.go:126] duration metric: took 7.969634ms to wait for k8s-apps to be running ...
	I1017 19:32:55.352135  324968 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:32:55.352192  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:32:55.367145  324968 system_svc.go:56] duration metric: took 14.999806ms WaitForService to wait for kubelet
	I1017 19:32:55.367171  324968 kubeadm.go:586] duration metric: took 30.268632021s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:32:55.367192  324968 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:32:55.370727  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:32:55.370762  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:32:55.370773  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:32:55.370778  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:32:55.370782  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:32:55.370786  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:32:55.370790  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:32:55.370793  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:32:55.370798  324968 node_conditions.go:105] duration metric: took 3.600536ms to run NodePressure ...
	I1017 19:32:55.370811  324968 start.go:241] waiting for startup goroutines ...
	I1017 19:32:55.370845  324968 start.go:255] writing updated cluster config ...
	I1017 19:32:55.374424  324968 out.go:203] 
	I1017 19:32:55.377636  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:32:55.377758  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:55.381262  324968 out.go:179] * Starting "ha-254035-m03" control-plane node in "ha-254035" cluster
	I1017 19:32:55.385137  324968 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:32:55.388169  324968 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:32:55.391014  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:32:55.391065  324968 cache.go:58] Caching tarball of preloaded images
	I1017 19:32:55.391130  324968 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:32:55.391213  324968 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:32:55.391250  324968 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:32:55.391408  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:55.410277  324968 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:32:55.410300  324968 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:32:55.410323  324968 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:32:55.410347  324968 start.go:360] acquireMachinesLock for ha-254035-m03: {Name:mked9f1e3aab9db3df3b59f9799fd7eb1b9dc756 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:32:55.410421  324968 start.go:364] duration metric: took 54.473µs to acquireMachinesLock for "ha-254035-m03"
	I1017 19:32:55.410445  324968 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:32:55.410454  324968 fix.go:54] fixHost starting: m03
	I1017 19:32:55.410732  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m03 --format={{.State.Status}}
	I1017 19:32:55.427703  324968 fix.go:112] recreateIfNeeded on ha-254035-m03: state=Stopped err=<nil>
	W1017 19:32:55.427730  324968 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:32:55.431363  324968 out.go:252] * Restarting existing docker container for "ha-254035-m03" ...
	I1017 19:32:55.431457  324968 cli_runner.go:164] Run: docker start ha-254035-m03
	I1017 19:32:55.755807  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m03 --format={{.State.Status}}
	I1017 19:32:55.777127  324968 kic.go:430] container "ha-254035-m03" state is running.
	I1017 19:32:55.777489  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m03
	I1017 19:32:55.800244  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:32:55.800494  324968 machine.go:93] provisionDockerMachine start ...
	I1017 19:32:55.800582  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:32:55.829783  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:55.830097  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 19:32:55.830107  324968 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:32:55.830700  324968 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:32:59.026446  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m03
	
	I1017 19:32:59.026469  324968 ubuntu.go:182] provisioning hostname "ha-254035-m03"
	I1017 19:32:59.026531  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:32:59.057027  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:59.057341  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 19:32:59.057359  324968 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035-m03 && echo "ha-254035-m03" | sudo tee /etc/hostname
	I1017 19:32:59.282090  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m03
	
	I1017 19:32:59.282168  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:32:59.325073  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:32:59.325398  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 19:32:59.325420  324968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:32:59.509111  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:32:59.509181  324968 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:32:59.509265  324968 ubuntu.go:190] setting up certificates
	I1017 19:32:59.509297  324968 provision.go:84] configureAuth start
	I1017 19:32:59.509400  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m03
	I1017 19:32:59.548783  324968 provision.go:143] copyHostCerts
	I1017 19:32:59.548834  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:59.548871  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:32:59.548878  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:32:59.548957  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:32:59.549040  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:59.549072  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:32:59.549078  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:32:59.549106  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:32:59.549151  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:59.549168  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:32:59.549172  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:32:59.549195  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:32:59.549242  324968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035-m03 san=[127.0.0.1 192.168.49.4 ha-254035-m03 localhost minikube]
	I1017 19:33:00.043691  324968 provision.go:177] copyRemoteCerts
	I1017 19:33:00.043871  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:33:00.043944  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:00.064471  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:33:00.223369  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:33:00.223446  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:33:00.260611  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:33:00.260683  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:33:00.317143  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:33:00.317306  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:33:00.385743  324968 provision.go:87] duration metric: took 876.417393ms to configureAuth
	I1017 19:33:00.385819  324968 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:33:00.386115  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:00.386276  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:00.432179  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:00.432495  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33194 <nil> <nil>}
	I1017 19:33:00.432512  324968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:33:00.901503  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:33:00.901591  324968 machine.go:96] duration metric: took 5.101084009s to provisionDockerMachine
	I1017 19:33:00.901618  324968 start.go:293] postStartSetup for "ha-254035-m03" (driver="docker")
	I1017 19:33:00.901662  324968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:33:00.901753  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:33:00.901835  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:00.927269  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:33:01.051646  324968 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:33:01.055666  324968 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:33:01.055692  324968 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:33:01.055704  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:33:01.055763  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:33:01.055854  324968 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:33:01.055866  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:33:01.055965  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:33:01.066853  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:33:01.101261  324968 start.go:296] duration metric: took 199.597831ms for postStartSetup
	I1017 19:33:01.101355  324968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:33:01.101408  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:01.130630  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:33:01.323449  324968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:33:01.379781  324968 fix.go:56] duration metric: took 5.969318931s for fixHost
	I1017 19:33:01.379809  324968 start.go:83] releasing machines lock for "ha-254035-m03", held for 5.969375603s
	I1017 19:33:01.379881  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m03
	I1017 19:33:01.416934  324968 out.go:179] * Found network options:
	I1017 19:33:01.419424  324968 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1017 19:33:01.422873  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:01.422914  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:01.422951  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:01.422967  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 19:33:01.423035  324968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:33:01.423092  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:01.423496  324968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:33:01.423560  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:33:01.460787  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:33:01.468755  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33194 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:33:01.901807  324968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:33:02.054376  324968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:33:02.054456  324968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:33:02.063698  324968 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:33:02.063723  324968 start.go:495] detecting cgroup driver to use...
	I1017 19:33:02.063757  324968 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:33:02.063816  324968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:33:02.083121  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:33:02.099886  324968 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:33:02.099962  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:33:02.129631  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:33:02.146247  324968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:33:02.487383  324968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:33:02.778663  324968 docker.go:234] disabling docker service ...
	I1017 19:33:02.778765  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:33:02.797150  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:33:02.816103  324968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:33:03.072265  324968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:33:03.311051  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:33:03.337034  324968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:33:03.367080  324968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:33:03.367228  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.379211  324968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:33:03.379292  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.403390  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.417512  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.434353  324968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:33:03.450504  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.465403  324968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.497155  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:03.516048  324968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:33:03.527113  324968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:33:03.546234  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:03.821017  324968 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:33:05.091469  324968 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.270414549s)
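	(Editor's note, not part of the captured output: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf — pause image, cgroupfs cgroup manager, conmon_cgroup, and the unprivileged-port sysctl — before CRI-O is restarted. A minimal sketch of how the resulting drop-in could be spot-checked from the host; the profile and node names are taken from this log, and the grep pattern is only illustrative:)

	    # Sketch only: inspect the drop-in that the sed edits above produced on the m03 node.
	    minikube -p ha-254035 ssh -n m03 -- \
	      sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf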
	I1017 19:33:05.091496  324968 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:33:05.091552  324968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:33:05.096822  324968 start.go:563] Will wait 60s for crictl version
	I1017 19:33:05.096899  324968 ssh_runner.go:195] Run: which crictl
	I1017 19:33:05.102601  324968 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:33:05.133868  324968 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:33:05.133956  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:33:05.169578  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:33:05.203999  324968 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:33:05.206796  324968 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 19:33:05.209777  324968 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1017 19:33:05.212751  324968 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:33:05.237841  324968 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:33:05.242830  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:33:05.255230  324968 mustload.go:65] Loading cluster: ha-254035
	I1017 19:33:05.255472  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:05.255718  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:33:05.273658  324968 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:33:05.273934  324968 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.4
	I1017 19:33:05.273942  324968 certs.go:195] generating shared ca certs ...
	I1017 19:33:05.273956  324968 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:33:05.274063  324968 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:33:05.274105  324968 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:33:05.274111  324968 certs.go:257] generating profile certs ...
	I1017 19:33:05.274183  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key
	I1017 19:33:05.274262  324968 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key.db0a5916
	I1017 19:33:05.274301  324968 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key
	I1017 19:33:05.274310  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:33:05.274333  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:33:05.274345  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:33:05.274357  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:33:05.274367  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 19:33:05.274379  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 19:33:05.274397  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 19:33:05.274409  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 19:33:05.274457  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:33:05.274485  324968 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:33:05.274493  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:33:05.274518  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:33:05.274539  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:33:05.274559  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:33:05.274597  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:33:05.274622  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:33:05.274637  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:33:05.274648  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:05.274703  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:33:05.302509  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33184 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:33:05.404899  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 19:33:05.408751  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 19:33:05.417079  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 19:33:05.420443  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 19:33:05.429786  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 19:33:05.433515  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 19:33:05.442432  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 19:33:05.446029  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1017 19:33:05.456258  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 19:33:05.460045  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 19:33:05.468819  324968 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 19:33:05.473279  324968 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 19:33:05.482460  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:33:05.502746  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:33:05.521060  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:33:05.540206  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:33:05.559261  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 19:33:05.579914  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 19:33:05.607376  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:33:05.624208  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:33:05.643462  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:33:05.663238  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:33:05.685107  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:33:05.703927  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 19:33:05.716945  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 19:33:05.730309  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 19:33:05.744332  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1017 19:33:05.760823  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 19:33:05.781849  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 19:33:05.797383  324968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1017 19:33:05.815449  324968 ssh_runner.go:195] Run: openssl version
	I1017 19:33:05.822374  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:33:05.830919  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:05.835675  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:05.835801  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:05.879325  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:33:05.888083  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:33:05.896261  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:33:05.900178  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:33:05.900239  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:33:05.943707  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 19:33:05.952618  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:33:05.961373  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:33:05.964981  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:33:05.965094  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:33:06.008396  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:33:06.017978  324968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:33:06.022220  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 19:33:06.064442  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 19:33:06.106411  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 19:33:06.147611  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 19:33:06.191689  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 19:33:06.235810  324968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
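	(Editor's note, not part of the captured output: the six openssl runs above use `-checkend 86400`, which exits 0 if the certificate is still valid 86400 seconds (24 hours) from now and non-zero if it would expire within that window — presumably the signal minikube uses to decide whether a cert needs regenerating. A self-contained sketch of the same check, with the path taken from the log:)

	    # Sketch: exit status 0 means the cert stays valid for at least the next 86400 seconds (24h).
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "cert valid for >= 24h" \
	      || echo "cert expires within 24h (or could not be read)"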
	I1017 19:33:06.278610  324968 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1017 19:33:06.278711  324968 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:33:06.278740  324968 kube-vip.go:115] generating kube-vip config ...
	I1017 19:33:06.278801  324968 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 19:33:06.292033  324968 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:33:06.292094  324968 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
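	(Editor's note, not part of the captured output: the static pod manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so the kubelet runs kube-vip on this control-plane node; once a leader holds the plndr-cp-lock lease, the VIP 192.168.49.254 from the `address` env var should be bound on eth0 of that node. A hedged way to confirm this from the host — the command is an illustration, not something the test ran:)

	    # Sketch: the VIP should appear as a secondary IPv4 address on eth0 of whichever node holds the kube-vip lease.
	    minikube -p ha-254035 ssh -n m03 -- "ip -4 addr show eth0"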
	I1017 19:33:06.292151  324968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:33:06.300562  324968 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:33:06.300652  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 19:33:06.314364  324968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:33:06.329602  324968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:33:06.360017  324968 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 19:33:06.379948  324968 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:33:06.383943  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:33:06.395455  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:06.558780  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:33:06.573849  324968 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:33:06.574138  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:06.579819  324968 out.go:179] * Verifying Kubernetes components...
	I1017 19:33:06.582763  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:06.726699  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:33:06.743509  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 19:33:06.743622  324968 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 19:33:06.743944  324968 node_ready.go:35] waiting up to 6m0s for node "ha-254035-m03" to be "Ready" ...
	W1017 19:33:08.748353  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:11.248113  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:13.747938  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:16.248008  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:18.248671  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:20.249311  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:22.747279  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:24.747653  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	W1017 19:33:26.749385  324968 node_ready.go:57] node "ha-254035-m03" has "Ready":"Unknown" status (will retry)
	I1017 19:33:27.747523  324968 node_ready.go:49] node "ha-254035-m03" is "Ready"
	I1017 19:33:27.747558  324968 node_ready.go:38] duration metric: took 21.003579566s for node "ha-254035-m03" to be "Ready" ...
	I1017 19:33:27.747571  324968 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:33:27.747631  324968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:33:27.766700  324968 api_server.go:72] duration metric: took 21.192473888s to wait for apiserver process to appear ...
	I1017 19:33:27.766729  324968 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:33:27.766753  324968 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 19:33:27.775571  324968 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 19:33:27.776498  324968 api_server.go:141] control plane version: v1.34.1
	I1017 19:33:27.776585  324968 api_server.go:131] duration metric: took 9.846294ms to wait for apiserver health ...
	I1017 19:33:27.776595  324968 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:33:27.783374  324968 system_pods.go:59] 26 kube-system pods found
	I1017 19:33:27.783414  324968 system_pods.go:61] "coredns-66bc5c9577-gfklr" [8bf2b43b-91c9-4531-a571-36060412860e] Running
	I1017 19:33:27.783426  324968 system_pods.go:61] "coredns-66bc5c9577-wbgc8" [8e82e918-326c-4295-82ea-e35a31f64287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:33:27.783431  324968 system_pods.go:61] "etcd-ha-254035" [b4680f45-2e5c-49cd-8f12-76cd58e8a039] Running
	I1017 19:33:27.783438  324968 system_pods.go:61] "etcd-ha-254035-m02" [fd83b82f-417f-4a8d-b6f2-82d1a3ea4233] Running
	I1017 19:33:27.783442  324968 system_pods.go:61] "etcd-ha-254035-m03" [98b26c2c-cb88-4ade-80f5-45b9d2b82e8f] Running
	I1017 19:33:27.783446  324968 system_pods.go:61] "kindnet-2k9kj" [79d0c5f8-da5a-4d9e-b627-6746685bb4ec] Running
	I1017 19:33:27.783450  324968 system_pods.go:61] "kindnet-gzzsg" [9d09bb8e-ddb5-4533-9215-83fefb05a7eb] Running
	I1017 19:33:27.783455  324968 system_pods.go:61] "kindnet-pwhwv" [45fe6d6c-f02a-45fd-807f-68edc98a1964] Running
	I1017 19:33:27.783464  324968 system_pods.go:61] "kindnet-vss98" [a6f8b1bf-7a57-4b08-ba72-5c79fe8d1cbe] Running
	I1017 19:33:27.783469  324968 system_pods.go:61] "kube-apiserver-ha-254035" [d7b4adda-06ab-4426-9829-87c607195341] Running
	I1017 19:33:27.783480  324968 system_pods.go:61] "kube-apiserver-ha-254035-m02" [9099db15-8600-470e-94c3-ca2a5eeea1ff] Running
	I1017 19:33:27.783484  324968 system_pods.go:61] "kube-apiserver-ha-254035-m03" [eb9a2a88-a691-4422-bb82-e0c198d601eb] Running
	I1017 19:33:27.783489  324968 system_pods.go:61] "kube-controller-manager-ha-254035" [9c5287e1-d9d8-4020-b6ec-b1059fff6764] Running
	I1017 19:33:27.783500  324968 system_pods.go:61] "kube-controller-manager-ha-254035-m02" [54702c01-b38e-4b5e-b7ea-e5af903630c0] Running
	I1017 19:33:27.783505  324968 system_pods.go:61] "kube-controller-manager-ha-254035-m03" [2bfb9df5-b257-45ec-be05-e930f56e3c7c] Running
	I1017 19:33:27.783509  324968 system_pods.go:61] "kube-proxy-548b2" [4b772887-90df-4871-9343-69349bdda859] Running
	I1017 19:33:27.783519  324968 system_pods.go:61] "kube-proxy-b4fr6" [a7ace6b8-0068-4c44-b8d9-8d66b10fa286] Running
	I1017 19:33:27.783524  324968 system_pods.go:61] "kube-proxy-fr5ts" [5c43f8a5-c3e0-4893-9ab0-c99f69a43434] Running
	I1017 19:33:27.783528  324968 system_pods.go:61] "kube-proxy-k56cv" [32bc352e-19aa-4bcf-8c5f-bb6ffa1b2f4d] Running
	I1017 19:33:27.783532  324968 system_pods.go:61] "kube-scheduler-ha-254035" [2f888dff-efbc-410b-9e14-93754573f2f6] Running
	I1017 19:33:27.783536  324968 system_pods.go:61] "kube-scheduler-ha-254035-m02" [dcaa8956-7720-467c-86d5-c0296adc07dc] Running
	I1017 19:33:27.783541  324968 system_pods.go:61] "kube-scheduler-ha-254035-m03" [00e19215-9094-448d-b734-227230b1c474] Running
	I1017 19:33:27.783545  324968 system_pods.go:61] "kube-vip-ha-254035" [777cc428-db79-4dee-abea-a428f4fabb67] Running
	I1017 19:33:27.783552  324968 system_pods.go:61] "kube-vip-ha-254035-m02" [3a49ae9c-fc6c-4ed7-9162-7ebc56124917] Running
	I1017 19:33:27.783556  324968 system_pods.go:61] "kube-vip-ha-254035-m03" [fa0f29b9-585d-4e28-9e32-7d493f0010dd] Running
	I1017 19:33:27.783564  324968 system_pods.go:61] "storage-provisioner" [4784cc20-6df7-4e32-bbfa-e0b3be4a1e83] Running
	I1017 19:33:27.783569  324968 system_pods.go:74] duration metric: took 6.965509ms to wait for pod list to return data ...
	I1017 19:33:27.783582  324968 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:33:27.788939  324968 default_sa.go:45] found service account: "default"
	I1017 19:33:27.788978  324968 default_sa.go:55] duration metric: took 5.380156ms for default service account to be created ...
	I1017 19:33:27.788989  324968 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:33:27.884397  324968 system_pods.go:86] 26 kube-system pods found
	I1017 19:33:27.884440  324968 system_pods.go:89] "coredns-66bc5c9577-gfklr" [8bf2b43b-91c9-4531-a571-36060412860e] Running
	I1017 19:33:27.884450  324968 system_pods.go:89] "coredns-66bc5c9577-wbgc8" [8e82e918-326c-4295-82ea-e35a31f64287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:33:27.884456  324968 system_pods.go:89] "etcd-ha-254035" [b4680f45-2e5c-49cd-8f12-76cd58e8a039] Running
	I1017 19:33:27.884462  324968 system_pods.go:89] "etcd-ha-254035-m02" [fd83b82f-417f-4a8d-b6f2-82d1a3ea4233] Running
	I1017 19:33:27.884466  324968 system_pods.go:89] "etcd-ha-254035-m03" [98b26c2c-cb88-4ade-80f5-45b9d2b82e8f] Running
	I1017 19:33:27.884475  324968 system_pods.go:89] "kindnet-2k9kj" [79d0c5f8-da5a-4d9e-b627-6746685bb4ec] Running
	I1017 19:33:27.884478  324968 system_pods.go:89] "kindnet-gzzsg" [9d09bb8e-ddb5-4533-9215-83fefb05a7eb] Running
	I1017 19:33:27.884482  324968 system_pods.go:89] "kindnet-pwhwv" [45fe6d6c-f02a-45fd-807f-68edc98a1964] Running
	I1017 19:33:27.884494  324968 system_pods.go:89] "kindnet-vss98" [a6f8b1bf-7a57-4b08-ba72-5c79fe8d1cbe] Running
	I1017 19:33:27.884505  324968 system_pods.go:89] "kube-apiserver-ha-254035" [d7b4adda-06ab-4426-9829-87c607195341] Running
	I1017 19:33:27.884525  324968 system_pods.go:89] "kube-apiserver-ha-254035-m02" [9099db15-8600-470e-94c3-ca2a5eeea1ff] Running
	I1017 19:33:27.884531  324968 system_pods.go:89] "kube-apiserver-ha-254035-m03" [eb9a2a88-a691-4422-bb82-e0c198d601eb] Running
	I1017 19:33:27.884535  324968 system_pods.go:89] "kube-controller-manager-ha-254035" [9c5287e1-d9d8-4020-b6ec-b1059fff6764] Running
	I1017 19:33:27.884540  324968 system_pods.go:89] "kube-controller-manager-ha-254035-m02" [54702c01-b38e-4b5e-b7ea-e5af903630c0] Running
	I1017 19:33:27.884545  324968 system_pods.go:89] "kube-controller-manager-ha-254035-m03" [2bfb9df5-b257-45ec-be05-e930f56e3c7c] Running
	I1017 19:33:27.884559  324968 system_pods.go:89] "kube-proxy-548b2" [4b772887-90df-4871-9343-69349bdda859] Running
	I1017 19:33:27.884563  324968 system_pods.go:89] "kube-proxy-b4fr6" [a7ace6b8-0068-4c44-b8d9-8d66b10fa286] Running
	I1017 19:33:27.884567  324968 system_pods.go:89] "kube-proxy-fr5ts" [5c43f8a5-c3e0-4893-9ab0-c99f69a43434] Running
	I1017 19:33:27.884571  324968 system_pods.go:89] "kube-proxy-k56cv" [32bc352e-19aa-4bcf-8c5f-bb6ffa1b2f4d] Running
	I1017 19:33:27.884602  324968 system_pods.go:89] "kube-scheduler-ha-254035" [2f888dff-efbc-410b-9e14-93754573f2f6] Running
	I1017 19:33:27.884606  324968 system_pods.go:89] "kube-scheduler-ha-254035-m02" [dcaa8956-7720-467c-86d5-c0296adc07dc] Running
	I1017 19:33:27.884610  324968 system_pods.go:89] "kube-scheduler-ha-254035-m03" [00e19215-9094-448d-b734-227230b1c474] Running
	I1017 19:33:27.884614  324968 system_pods.go:89] "kube-vip-ha-254035" [777cc428-db79-4dee-abea-a428f4fabb67] Running
	I1017 19:33:27.884618  324968 system_pods.go:89] "kube-vip-ha-254035-m02" [3a49ae9c-fc6c-4ed7-9162-7ebc56124917] Running
	I1017 19:33:27.884622  324968 system_pods.go:89] "kube-vip-ha-254035-m03" [fa0f29b9-585d-4e28-9e32-7d493f0010dd] Running
	I1017 19:33:27.884630  324968 system_pods.go:89] "storage-provisioner" [4784cc20-6df7-4e32-bbfa-e0b3be4a1e83] Running
	I1017 19:33:27.884636  324968 system_pods.go:126] duration metric: took 95.641254ms to wait for k8s-apps to be running ...
	I1017 19:33:27.884659  324968 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:33:27.884730  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:33:27.903571  324968 system_svc.go:56] duration metric: took 18.903653ms WaitForService to wait for kubelet
	I1017 19:33:27.903609  324968 kubeadm.go:586] duration metric: took 21.32938831s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:33:27.903634  324968 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:33:27.907627  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:27.907667  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:27.907680  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:27.907685  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:27.907689  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:27.907694  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:27.907697  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:27.907701  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:27.907706  324968 node_conditions.go:105] duration metric: took 4.066189ms to run NodePressure ...
	I1017 19:33:27.907719  324968 start.go:241] waiting for startup goroutines ...
	I1017 19:33:27.907751  324968 start.go:255] writing updated cluster config ...
	I1017 19:33:27.911402  324968 out.go:203] 
	I1017 19:33:27.915521  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:27.915649  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:33:27.918913  324968 out.go:179] * Starting "ha-254035-m04" worker node in "ha-254035" cluster
	I1017 19:33:27.921713  324968 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:33:27.924620  324968 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:33:27.927532  324968 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:33:27.927564  324968 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:33:27.927567  324968 cache.go:58] Caching tarball of preloaded images
	I1017 19:33:27.927721  324968 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:33:27.927731  324968 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:33:27.927887  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:33:27.960833  324968 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 19:33:27.960852  324968 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 19:33:27.960865  324968 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:33:27.960889  324968 start.go:360] acquireMachinesLock for ha-254035-m04: {Name:mk584e2cd96462cdaa6d1f2088a137ff40c48733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:33:27.960940  324968 start.go:364] duration metric: took 36.438µs to acquireMachinesLock for "ha-254035-m04"
	I1017 19:33:27.960959  324968 start.go:96] Skipping create...Using existing machine configuration
	I1017 19:33:27.960964  324968 fix.go:54] fixHost starting: m04
	I1017 19:33:27.961255  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m04 --format={{.State.Status}}
	I1017 19:33:27.995390  324968 fix.go:112] recreateIfNeeded on ha-254035-m04: state=Stopped err=<nil>
	W1017 19:33:27.995487  324968 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 19:33:27.999207  324968 out.go:252] * Restarting existing docker container for "ha-254035-m04" ...
	I1017 19:33:27.999295  324968 cli_runner.go:164] Run: docker start ha-254035-m04
	I1017 19:33:28.394503  324968 cli_runner.go:164] Run: docker container inspect ha-254035-m04 --format={{.State.Status}}
	I1017 19:33:28.421995  324968 kic.go:430] container "ha-254035-m04" state is running.
	I1017 19:33:28.422449  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m04
	I1017 19:33:28.441865  324968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/config.json ...
	I1017 19:33:28.442116  324968 machine.go:93] provisionDockerMachine start ...
	I1017 19:33:28.442199  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:28.474872  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:28.475264  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 19:33:28.475277  324968 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:33:28.476011  324968 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 19:33:31.633234  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m04
	
	I1017 19:33:31.633323  324968 ubuntu.go:182] provisioning hostname "ha-254035-m04"
	I1017 19:33:31.633415  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:31.653177  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:31.653483  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 19:33:31.653500  324968 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-254035-m04 && echo "ha-254035-m04" | sudo tee /etc/hostname
	I1017 19:33:31.837574  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-254035-m04
	
	I1017 19:33:31.837648  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:31.855639  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:31.855942  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 19:33:31.855960  324968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-254035-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-254035-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-254035-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:33:32.021671  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:33:32.021700  324968 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 19:33:32.021717  324968 ubuntu.go:190] setting up certificates
	I1017 19:33:32.021728  324968 provision.go:84] configureAuth start
	I1017 19:33:32.021791  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m04
	I1017 19:33:32.058708  324968 provision.go:143] copyHostCerts
	I1017 19:33:32.058751  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:33:32.058799  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 19:33:32.058807  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 19:33:32.058887  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 19:33:32.058963  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:33:32.058981  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 19:33:32.058986  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 19:33:32.059011  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 19:33:32.059054  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:33:32.059070  324968 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 19:33:32.059074  324968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 19:33:32.059096  324968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 19:33:32.059142  324968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.ha-254035-m04 san=[127.0.0.1 192.168.49.5 ha-254035-m04 localhost minikube]
	I1017 19:33:32.315144  324968 provision.go:177] copyRemoteCerts
	I1017 19:33:32.315269  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:33:32.315346  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:32.336727  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:32.451884  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 19:33:32.451953  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:33:32.477259  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 19:33:32.477335  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:33:32.496861  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 19:33:32.496932  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:33:32.517190  324968 provision.go:87] duration metric: took 495.446144ms to configureAuth
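The configureAuth step above generates a server certificate whose SANs cover 127.0.0.1, 192.168.49.5, ha-254035-m04, localhost and minikube, and the copyRemoteCerts step pushes it to /etc/docker/server.pem on the node. A minimal spot-check of the SAN list on the node, assuming openssl is present in the node image:

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
    # should list DNS:ha-254035-m04, DNS:localhost, DNS:minikube, IP:127.0.0.1, IP:192.168.49.5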
	I1017 19:33:32.517214  324968 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:33:32.517497  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:32.517606  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:32.538066  324968 main.go:141] libmachine: Using SSH client type: native
	I1017 19:33:32.538377  324968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33199 <nil> <nil>}
	I1017 19:33:32.538397  324968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:33:32.868308  324968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:33:32.868331  324968 machine.go:96] duration metric: took 4.426196148s to provisionDockerMachine
	I1017 19:33:32.868343  324968 start.go:293] postStartSetup for "ha-254035-m04" (driver="docker")
	I1017 19:33:32.868353  324968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:33:32.868430  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:33:32.868488  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:32.888400  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:33.003003  324968 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:33:33.008119  324968 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:33:33.008155  324968 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:33:33.008169  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 19:33:33.008242  324968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 19:33:33.008327  324968 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 19:33:33.008339  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /etc/ssl/certs/2595962.pem
	I1017 19:33:33.008446  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 19:33:33.018512  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:33:33.048826  324968 start.go:296] duration metric: took 180.468283ms for postStartSetup
	I1017 19:33:33.048927  324968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:33:33.048979  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:33.068864  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:33.183386  324968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:33:33.188620  324968 fix.go:56] duration metric: took 5.227645919s for fixHost
	I1017 19:33:33.188649  324968 start.go:83] releasing machines lock for "ha-254035-m04", held for 5.227700884s
	I1017 19:33:33.188718  324968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m04
	I1017 19:33:33.212152  324968 out.go:179] * Found network options:
	I1017 19:33:33.215093  324968 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1017 19:33:33.217835  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217871  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217882  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217906  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217916  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 19:33:33.217926  324968 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 19:33:33.217995  324968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:33:33.218040  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:33.218316  324968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:33:33.218377  324968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:33:33.247548  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:33.256825  324968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:33:33.415645  324968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:33:33.492514  324968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:33:33.492637  324968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:33:33.500683  324968 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 19:33:33.500716  324968 start.go:495] detecting cgroup driver to use...
	I1017 19:33:33.500752  324968 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:33:33.500801  324968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:33:33.517445  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:33:33.537937  324968 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:33:33.538053  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:33:33.556447  324968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:33:33.576435  324968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:33:33.721164  324968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:33:33.856018  324968 docker.go:234] disabling docker service ...
	I1017 19:33:33.856163  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:33:33.874251  324968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:33:33.889153  324968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:33:34.059244  324968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:33:34.205588  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:33:34.223596  324968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:33:34.248335  324968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:33:34.248449  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.259664  324968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:33:34.259750  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.274225  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.284260  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.293374  324968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:33:34.301939  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.313190  324968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.322270  324968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:33:34.335994  324968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:33:34.345500  324968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:33:34.355597  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:34.485902  324968 ssh_runner.go:195] Run: sudo systemctl restart crio
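The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.10.1 as the pause image, cgroupfs as the cgroup manager, "pod" as the conmon cgroup, and net.ipv4.ip_unprivileged_port_start=0 as a default sysctl. After the restart, the effective drop-in can be spot-checked with something like the following (a sketch; the surrounding keys depend on what the base image ships):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected to include lines like:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",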
	I1017 19:33:34.658593  324968 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:33:34.658711  324968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:33:34.663315  324968 start.go:563] Will wait 60s for crictl version
	I1017 19:33:34.663396  324968 ssh_runner.go:195] Run: which crictl
	I1017 19:33:34.667245  324968 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:33:34.704265  324968 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:33:34.704411  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:33:34.738612  324968 ssh_runner.go:195] Run: crio --version
	I1017 19:33:34.775046  324968 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:33:34.777914  324968 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 19:33:34.780845  324968 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1017 19:33:34.783723  324968 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1017 19:33:34.786627  324968 cli_runner.go:164] Run: docker network inspect ha-254035 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:33:34.808635  324968 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:33:34.815185  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:33:34.827225  324968 mustload.go:65] Loading cluster: ha-254035
	I1017 19:33:34.827480  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:34.827743  324968 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:33:34.847031  324968 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:33:34.847380  324968 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035 for IP: 192.168.49.5
	I1017 19:33:34.847390  324968 certs.go:195] generating shared ca certs ...
	I1017 19:33:34.847415  324968 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:33:34.847641  324968 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 19:33:34.847708  324968 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 19:33:34.847720  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 19:33:34.847749  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 19:33:34.847765  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 19:33:34.847775  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 19:33:34.847869  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 19:33:34.847922  324968 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 19:33:34.847932  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:33:34.847959  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:33:34.847999  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:33:34.848045  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 19:33:34.848123  324968 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 19:33:34.848155  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem -> /usr/share/ca-certificates/259596.pem
	I1017 19:33:34.848175  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> /usr/share/ca-certificates/2595962.pem
	I1017 19:33:34.848187  324968 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:34.848206  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:33:34.868384  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 19:33:34.889303  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:33:34.915103  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 19:33:34.947695  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 19:33:34.970689  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 19:33:34.991429  324968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:33:35.015821  324968 ssh_runner.go:195] Run: openssl version
	I1017 19:33:35.023417  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 19:33:35.033117  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 19:33:35.038047  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 19:33:35.038163  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 19:33:35.080117  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 19:33:35.088886  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:33:35.098283  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:35.103083  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:35.103169  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:33:35.146427  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:33:35.160483  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 19:33:35.172663  324968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 19:33:35.177994  324968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 19:33:35.178116  324968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 19:33:35.221220  324968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
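The openssl x509 -hash calls above compute each CA's subject-name hash, and the ln -fs commands create the corresponding <hash>.0 symlinks (3ec20f2e.0, b5213941.0, 51391683.0) that OpenSSL's default verification path uses to look up trusted certificates. Reproducing the minikubeCA link by hand on the node would look roughly like:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0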
	I1017 19:33:35.236438  324968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:33:35.243682  324968 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 19:33:35.243736  324968 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.34.1 crio false true} ...
	I1017 19:33:35.243840  324968 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-254035-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-254035 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
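The kubelet flags shown above are installed as the 10-kubeadm.conf drop-in over the stock kubelet.service, as the two scp calls below show. On the node, the merged unit can be inspected with:

    systemctl cat kubelet   # prints /lib/systemd/system/kubelet.service followed by the 10-kubeadm.conf drop-in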
	I1017 19:33:35.243919  324968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:33:35.253526  324968 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:33:35.253625  324968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1017 19:33:35.262623  324968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:33:35.276015  324968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:33:35.290622  324968 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 19:33:35.294428  324968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:33:35.304725  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:35.455305  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:33:35.471222  324968 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1017 19:33:35.471611  324968 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:33:35.476720  324968 out.go:179] * Verifying Kubernetes components...
	I1017 19:33:35.479857  324968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:33:35.599550  324968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:33:35.615050  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 19:33:35.615120  324968 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 19:33:35.615344  324968 node_ready.go:35] waiting up to 6m0s for node "ha-254035-m04" to be "Ready" ...
	W1017 19:33:37.619036  324968 node_ready.go:57] node "ha-254035-m04" has "Ready":"Unknown" status (will retry)
	W1017 19:33:39.619924  324968 node_ready.go:57] node "ha-254035-m04" has "Ready":"Unknown" status (will retry)
	W1017 19:33:42.120954  324968 node_ready.go:57] node "ha-254035-m04" has "Ready":"Unknown" status (will retry)
	I1017 19:33:42.619614  324968 node_ready.go:49] node "ha-254035-m04" is "Ready"
	I1017 19:33:42.619639  324968 node_ready.go:38] duration metric: took 7.004273155s for node "ha-254035-m04" to be "Ready" ...
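The node_ready poll above watches the node's Ready condition through the API server. An equivalent manual check, assuming kubectl is pointed at the ha-254035 cluster:

    kubectl get node ha-254035-m04 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints True once the kubelet on m04 reports Ready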
	I1017 19:33:42.619652  324968 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:33:42.619704  324968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:33:42.643671  324968 system_svc.go:56] duration metric: took 24.010635ms WaitForService to wait for kubelet
	I1017 19:33:42.643702  324968 kubeadm.go:586] duration metric: took 7.172435361s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:33:42.643720  324968 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:33:42.658471  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:42.658503  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:42.658515  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:42.658520  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:42.658524  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:42.658528  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:42.658532  324968 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:33:42.658536  324968 node_conditions.go:123] node cpu capacity is 2
	I1017 19:33:42.658541  324968 node_conditions.go:105] duration metric: took 14.815335ms to run NodePressure ...
	I1017 19:33:42.658553  324968 start.go:241] waiting for startup goroutines ...
	I1017 19:33:42.658578  324968 start.go:255] writing updated cluster config ...
	I1017 19:33:42.658896  324968 ssh_runner.go:195] Run: rm -f paused
	I1017 19:33:42.666036  324968 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:33:42.666578  324968 kapi.go:59] client config for ha-254035: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/ha-254035/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
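This extra-wait phase polls the kube-system control-plane pods by label until each is Ready or gone. A rough kubectl equivalent of the checks that follow, shown here only as a sketch:

    kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
    kubectl -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=4m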
	I1017 19:33:42.748115  324968 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gfklr" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.799614  324968 pod_ready.go:94] pod "coredns-66bc5c9577-gfklr" is "Ready"
	I1017 19:33:42.799652  324968 pod_ready.go:86] duration metric: took 51.505206ms for pod "coredns-66bc5c9577-gfklr" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.799662  324968 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wbgc8" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.845846  324968 pod_ready.go:94] pod "coredns-66bc5c9577-wbgc8" is "Ready"
	I1017 19:33:42.845885  324968 pod_ready.go:86] duration metric: took 46.206115ms for pod "coredns-66bc5c9577-wbgc8" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.863051  324968 pod_ready.go:83] waiting for pod "etcd-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.871909  324968 pod_ready.go:94] pod "etcd-ha-254035" is "Ready"
	I1017 19:33:42.871935  324968 pod_ready.go:86] duration metric: took 8.855813ms for pod "etcd-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.871945  324968 pod_ready.go:83] waiting for pod "etcd-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.880198  324968 pod_ready.go:94] pod "etcd-ha-254035-m02" is "Ready"
	I1017 19:33:42.880226  324968 pod_ready.go:86] duration metric: took 8.274439ms for pod "etcd-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:42.880236  324968 pod_ready.go:83] waiting for pod "etcd-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.067322  324968 request.go:683] "Waited before sending request" delay="183.325668ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m03"
	I1017 19:33:43.071041  324968 pod_ready.go:94] pod "etcd-ha-254035-m03" is "Ready"
	I1017 19:33:43.071067  324968 pod_ready.go:86] duration metric: took 190.824595ms for pod "etcd-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.267504  324968 request.go:683] "Waited before sending request" delay="196.34087ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1017 19:33:43.271686  324968 pod_ready.go:83] waiting for pod "kube-apiserver-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.468020  324968 request.go:683] "Waited before sending request" delay="196.217403ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-254035"
	I1017 19:33:43.666979  324968 request.go:683] "Waited before sending request" delay="194.232504ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035"
	I1017 19:33:43.670115  324968 pod_ready.go:94] pod "kube-apiserver-ha-254035" is "Ready"
	I1017 19:33:43.670144  324968 pod_ready.go:86] duration metric: took 398.430494ms for pod "kube-apiserver-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.670153  324968 pod_ready.go:83] waiting for pod "kube-apiserver-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:43.867552  324968 request.go:683] "Waited before sending request" delay="197.322859ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-254035-m02"
	I1017 19:33:44.067901  324968 request.go:683] "Waited before sending request" delay="193.273769ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m02"
	I1017 19:33:44.071414  324968 pod_ready.go:94] pod "kube-apiserver-ha-254035-m02" is "Ready"
	I1017 19:33:44.071442  324968 pod_ready.go:86] duration metric: took 401.282299ms for pod "kube-apiserver-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:44.071453  324968 pod_ready.go:83] waiting for pod "kube-apiserver-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:44.267920  324968 request.go:683] "Waited before sending request" delay="196.393406ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-254035-m03"
	I1017 19:33:44.467967  324968 request.go:683] "Waited before sending request" delay="196.317182ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m03"
	I1017 19:33:44.472041  324968 pod_ready.go:94] pod "kube-apiserver-ha-254035-m03" is "Ready"
	I1017 19:33:44.472068  324968 pod_ready.go:86] duration metric: took 400.608635ms for pod "kube-apiserver-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:44.667472  324968 request.go:683] "Waited before sending request" delay="195.295893ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1017 19:33:44.671549  324968 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:44.868014  324968 request.go:683] "Waited before sending request" delay="196.366601ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-254035"
	I1017 19:33:45.067086  324968 request.go:683] "Waited before sending request" delay="193.311224ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035"
	I1017 19:33:45.072221  324968 pod_ready.go:94] pod "kube-controller-manager-ha-254035" is "Ready"
	I1017 19:33:45.072250  324968 pod_ready.go:86] duration metric: took 400.67411ms for pod "kube-controller-manager-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:45.072261  324968 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:45.267682  324968 request.go:683] "Waited before sending request" delay="195.335416ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-254035-m02"
	I1017 19:33:45.467614  324968 request.go:683] "Waited before sending request" delay="188.393045ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m02"
	I1017 19:33:45.470975  324968 pod_ready.go:94] pod "kube-controller-manager-ha-254035-m02" is "Ready"
	I1017 19:33:45.471007  324968 pod_ready.go:86] duration metric: took 398.736291ms for pod "kube-controller-manager-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:45.471017  324968 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:45.667358  324968 request.go:683] "Waited before sending request" delay="196.263104ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-254035-m03"
	I1017 19:33:45.867478  324968 request.go:683] "Waited before sending request" delay="196.63098ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m03"
	I1017 19:33:45.870372  324968 pod_ready.go:94] pod "kube-controller-manager-ha-254035-m03" is "Ready"
	I1017 19:33:45.870427  324968 pod_ready.go:86] duration metric: took 399.402071ms for pod "kube-controller-manager-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.067916  324968 request.go:683] "Waited before sending request" delay="197.353037ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1017 19:33:46.071965  324968 pod_ready.go:83] waiting for pod "kube-proxy-548b2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.267426  324968 request.go:683] "Waited before sending request" delay="195.355338ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-548b2"
	I1017 19:33:46.467392  324968 request.go:683] "Waited before sending request" delay="193.351461ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035"
	I1017 19:33:46.470716  324968 pod_ready.go:94] pod "kube-proxy-548b2" is "Ready"
	I1017 19:33:46.470745  324968 pod_ready.go:86] duration metric: took 398.750601ms for pod "kube-proxy-548b2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.470755  324968 pod_ready.go:83] waiting for pod "kube-proxy-b4fr6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.667046  324968 request.go:683] "Waited before sending request" delay="196.219848ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b4fr6"
	I1017 19:33:46.867280  324968 request.go:683] "Waited before sending request" delay="196.299896ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m02"
	I1017 19:33:46.870670  324968 pod_ready.go:94] pod "kube-proxy-b4fr6" is "Ready"
	I1017 19:33:46.870707  324968 pod_ready.go:86] duration metric: took 399.946057ms for pod "kube-proxy-b4fr6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:46.870717  324968 pod_ready.go:83] waiting for pod "kube-proxy-fr5ts" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:47.067054  324968 request.go:683] "Waited before sending request" delay="196.240361ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fr5ts"
	I1017 19:33:47.267565  324968 request.go:683] "Waited before sending request" delay="196.190762ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m04"
	I1017 19:33:47.467316  324968 request.go:683] "Waited before sending request" delay="96.206992ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fr5ts"
	I1017 19:33:47.667564  324968 request.go:683] "Waited before sending request" delay="186.261475ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m04"
	I1017 19:33:48.067382  324968 request.go:683] "Waited before sending request" delay="186.267596ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m04"
	I1017 19:33:48.467049  324968 request.go:683] "Waited before sending request" delay="92.145258ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m04"
	W1017 19:33:48.877689  324968 pod_ready.go:104] pod "kube-proxy-fr5ts" is not "Ready", error: <nil>
	W1017 19:33:50.877808  324968 pod_ready.go:104] pod "kube-proxy-fr5ts" is not "Ready", error: <nil>
	I1017 19:33:52.377837  324968 pod_ready.go:94] pod "kube-proxy-fr5ts" is "Ready"
	I1017 19:33:52.377866  324968 pod_ready.go:86] duration metric: took 5.507143006s for pod "kube-proxy-fr5ts" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.377876  324968 pod_ready.go:83] waiting for pod "kube-proxy-k56cv" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.386625  324968 pod_ready.go:94] pod "kube-proxy-k56cv" is "Ready"
	I1017 19:33:52.386655  324968 pod_ready.go:86] duration metric: took 8.770737ms for pod "kube-proxy-k56cv" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.390245  324968 pod_ready.go:83] waiting for pod "kube-scheduler-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.467536  324968 request.go:683] "Waited before sending request" delay="77.200252ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-254035"
	I1017 19:33:52.667089  324968 request.go:683] "Waited before sending request" delay="193.299146ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035"
	I1017 19:33:52.670454  324968 pod_ready.go:94] pod "kube-scheduler-ha-254035" is "Ready"
	I1017 19:33:52.670484  324968 pod_ready.go:86] duration metric: took 280.216212ms for pod "kube-scheduler-ha-254035" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.670495  324968 pod_ready.go:83] waiting for pod "kube-scheduler-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:52.867921  324968 request.go:683] "Waited before sending request" delay="197.327438ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-254035-m02"
	I1017 19:33:53.067947  324968 request.go:683] "Waited before sending request" delay="195.176914ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m02"
	I1017 19:33:53.072896  324968 pod_ready.go:94] pod "kube-scheduler-ha-254035-m02" is "Ready"
	I1017 19:33:53.072972  324968 pod_ready.go:86] duration metric: took 402.46965ms for pod "kube-scheduler-ha-254035-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:53.072997  324968 pod_ready.go:83] waiting for pod "kube-scheduler-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:53.267273  324968 request.go:683] "Waited before sending request" delay="194.142538ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-254035-m03"
	I1017 19:33:53.467118  324968 request.go:683] "Waited before sending request" delay="196.200739ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-254035-m03"
	I1017 19:33:53.470125  324968 pod_ready.go:94] pod "kube-scheduler-ha-254035-m03" is "Ready"
	I1017 19:33:53.470152  324968 pod_ready.go:86] duration metric: took 397.132807ms for pod "kube-scheduler-ha-254035-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 19:33:53.470163  324968 pod_ready.go:40] duration metric: took 10.804092337s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 19:33:53.525625  324968 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 19:33:53.530847  324968 out.go:179] * Done! kubectl is now configured to use "ha-254035" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 17 19:33:01 ha-254035 crio[667]: time="2025-10-17T19:33:01.657638061Z" level=info msg="Started container" PID=1327 containerID=e9ece41337b80cfabb4196dc2d55dc644a949f49cd22450cf623b7f5257d5d69 description=kube-system/kindnet-gzzsg/kindnet-cni id=1467213a-df01-47f7-91a8-c9ecfa2692be name=/runtime.v1.RuntimeService/StartContainer sandboxID=fe908ac1b77150ea99b48733349b105097380b5cd2e2f243156591744040d978
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.209485703Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.212893465Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.212927827Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.21295117Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.216661947Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.216697064Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.216721523Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.220161292Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.220191347Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.220215756Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.223221953Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:33:12 ha-254035 crio[667]: time="2025-10-17T19:33:12.223254084Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:33:27 ha-254035 conmon[1135]: conmon 0cc2287088bc871e7f4d <ninfo>: container 1139 exited with status 1
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.068588792Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b7b509f3-b012-49ed-9e6d-e0ab750c4b6b name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.07344856Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=25fe3696-e90b-4a83-a3ad-33aa6af72f3d name=/runtime.v1.ImageService/ImageStatus
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.077367011Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=28e7f811-dec4-4fcb-9722-3a341888b632 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.077693042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.096972398Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.097208428Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/17cd3234a8a982607354e16eb6b88983eecf7edea137eb96fbc8cd597e6577e2/merged/etc/passwd: no such file or directory"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.09724453Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/17cd3234a8a982607354e16eb6b88983eecf7edea137eb96fbc8cd597e6577e2/merged/etc/group: no such file or directory"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.108385903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.143116992Z" level=info msg="Created container f03a6dda4443a7ca4881c99c1a1b1d649515e8a1e7c9d51bf1fad01a41e7083e: kube-system/storage-provisioner/storage-provisioner" id=28e7f811-dec4-4fcb-9722-3a341888b632 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.144104625Z" level=info msg="Starting container: f03a6dda4443a7ca4881c99c1a1b1d649515e8a1e7c9d51bf1fad01a41e7083e" id=e482d8e9-fc6c-4e49-a1a6-8af83382da5d name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 19:33:28 ha-254035 crio[667]: time="2025-10-17T19:33:28.153409034Z" level=info msg="Started container" PID=1450 containerID=f03a6dda4443a7ca4881c99c1a1b1d649515e8a1e7c9d51bf1fad01a41e7083e description=kube-system/storage-provisioner/storage-provisioner id=e482d8e9-fc6c-4e49-a1a6-8af83382da5d name=/runtime.v1.RuntimeService/StartContainer sandboxID=ebb6a1f53c4835f98f170cb0cc9a8c381e017f19896c6a29b18d262526414238
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	f03a6dda4443a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago       Running             storage-provisioner       4                   ebb6a1f53c483       storage-provisioner                 kube-system
	e9ece41337b80       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 minutes ago       Running             kindnet-cni               2                   fe908ac1b7715       kindnet-gzzsg                       kube-system
	83532ba0435f2       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   2 minutes ago       Running             busybox                   2                   0240e4c18c32a       busybox-7b57f96db7-nc6x2            default
	db8d02bae2fa1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago       Running             coredns                   2                   507d7b819debe       coredns-66bc5c9577-wbgc8            kube-system
	706bee2267664       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago       Running             coredns                   2                   c6367bcfd35d4       coredns-66bc5c9577-gfklr            kube-system
	d51ad27d42179       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 minutes ago       Running             kube-proxy                2                   7bb73f9365e64       kube-proxy-548b2                    kube-system
	0cc2287088bc8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago       Exited              storage-provisioner       3                   ebb6a1f53c483       storage-provisioner                 kube-system
	cd9dec0514b24       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   2 minutes ago       Running             kube-controller-manager   7                   251b6be3c0c4f       kube-controller-manager-ha-254035   kube-system
	d713edbb381bb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   3 minutes ago       Exited              kube-controller-manager   6                   251b6be3c0c4f       kube-controller-manager-ha-254035   kube-system
	fb534fcdb2d89       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   3 minutes ago       Running             kube-apiserver            3                   0fd33e0b5d3e5       kube-apiserver-ha-254035            kube-system
	ab6180a80f68d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   3 minutes ago       Running             etcd                      2                   bc1edea2f668b       etcd-ha-254035                      kube-system
	c4609fc3fd1c0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   3 minutes ago       Running             kube-scheduler            2                   32d4263a101a2       kube-scheduler-ha-254035            kube-system
	0652fd27f5bff       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   3 minutes ago       Running             kube-vip                  1                   31afc78057fe9       kube-vip-ha-254035                  kube-system
	
	
	==> coredns [706bee22676646b717cd807f92b3341bc3bee9a22195d1a96f63858b9fe3f381] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35042 - 59078 "HINFO IN 7580743585985535806.8578026735020374478. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014332173s
	
	
	==> coredns [db8d02bae2fa1a6f368ea962e35a1111cb4230bcadf4709cf7545ace2d4272d6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35443 - 54421 "HINFO IN 8550404136984308969.4709042246801981974. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015029672s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-254035
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_17_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:17:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:35:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:32:45 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:32:45 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:32:45 +0000   Fri, 17 Oct 2025 19:17:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:32:45 +0000   Fri, 17 Oct 2025 19:32:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-254035
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                eadb5c5f-dcbb-485c-aea7-3aa5b951fd9e
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-nc6x2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-66bc5c9577-gfklr             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 coredns-66bc5c9577-wbgc8             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-ha-254035                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-gzzsg                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-254035             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-254035    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-548b2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-254035             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-254035                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 2m37s                  kube-proxy       
	  Normal   Starting                 9m36s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-254035 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-254035 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 17m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-254035 status is now: NodeHasSufficientPID
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           17m                    node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           16m                    node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-254035 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)      kubelet          Node ha-254035 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-254035 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-254035 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           9m4s                   node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   NodeHasSufficientMemory  3m22s (x8 over 3m22s)  kubelet          Node ha-254035 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m22s (x8 over 3m22s)  kubelet          Node ha-254035 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m22s (x8 over 3m22s)  kubelet          Node ha-254035 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m44s                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           2m43s                  node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           2m7s                   node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	  Normal   RegisteredNode           53s                    node-controller  Node ha-254035 event: Registered Node ha-254035 in Controller
	
	
	Name:               ha-254035-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_18_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:18:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:35:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:33:05 +0000   Fri, 17 Oct 2025 19:32:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:33:05 +0000   Fri, 17 Oct 2025 19:32:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:33:05 +0000   Fri, 17 Oct 2025 19:32:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:33:05 +0000   Fri, 17 Oct 2025 19:32:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-254035-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                6c5e97e0-fa27-407d-a976-b646e8a40ca5
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6xjlp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-254035-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-vss98                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-ha-254035-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-254035-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-b4fr6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-254035-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-254035-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 2m17s                  kube-proxy       
	  Normal   RegisteredNode           16m                    node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           16m                    node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)      kubelet          Node ha-254035-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-254035-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-254035-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeNotReady             12m                    node-controller  Node ha-254035-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           9m4s                   node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   NodeNotReady             8m14s                  node-controller  Node ha-254035-m02 status is now: NodeNotReady
	  Normal   Starting                 3m19s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m19s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m19s (x8 over 3m19s)  kubelet          Node ha-254035-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m19s (x8 over 3m19s)  kubelet          Node ha-254035-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m19s (x8 over 3m19s)  kubelet          Node ha-254035-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m44s                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           2m43s                  node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           2m7s                   node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	  Normal   RegisteredNode           53s                    node-controller  Node ha-254035-m02 event: Registered Node ha-254035-m02 in Controller
	
	
	Name:               ha-254035-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_20_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:19:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:35:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:35:09 +0000   Fri, 17 Oct 2025 19:33:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:35:09 +0000   Fri, 17 Oct 2025 19:33:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:35:09 +0000   Fri, 17 Oct 2025 19:33:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:35:09 +0000   Fri, 17 Oct 2025 19:33:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-254035-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                2f343c58-0cc9-444a-bc88-7799c3ff52df
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-979zm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-254035-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-2k9kj                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-254035-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-254035-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-k56cv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-254035-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-254035-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   Starting                 112s                   kube-proxy       
	  Normal   RegisteredNode           15m                    node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           9m4s                   node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   NodeNotReady             8m14s                  node-controller  Node ha-254035-m03 status is now: NodeNotReady
	  Normal   RegisteredNode           2m44s                  node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           2m43s                  node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Warning  CgroupV1                 2m38s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 2m38s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m37s (x8 over 2m37s)  kubelet          Node ha-254035-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m37s (x8 over 2m37s)  kubelet          Node ha-254035-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m37s (x8 over 2m37s)  kubelet          Node ha-254035-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m7s                   node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	  Normal   RegisteredNode           53s                    node-controller  Node ha-254035-m03 event: Registered Node ha-254035-m03 in Controller
	
	
	Name:               ha-254035-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_21_16_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:21:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:35:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:33:42 +0000   Fri, 17 Oct 2025 19:33:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:33:42 +0000   Fri, 17 Oct 2025 19:33:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:33:42 +0000   Fri, 17 Oct 2025 19:33:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:33:42 +0000   Fri, 17 Oct 2025 19:33:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-254035-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                12691412-a8b5-426e-846e-d6161e527ea6
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pwhwv       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-proxy-fr5ts    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 104s                 kube-proxy       
	  Normal   Starting                 14m                  kube-proxy       
	  Warning  CgroupV1                 14m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m (x3 over 14m)    kubelet          Node ha-254035-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x3 over 14m)    kubelet          Node ha-254035-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m (x3 over 14m)    kubelet          Node ha-254035-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           14m                  node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   NodeReady                13m                  kubelet          Node ha-254035-m04 status is now: NodeReady
	  Normal   RegisteredNode           12m                  node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           9m4s                 node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   NodeNotReady             8m14s                node-controller  Node ha-254035-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           2m44s                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           2m43s                node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   RegisteredNode           2m7s                 node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	  Normal   Starting                 2m6s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m6s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m3s (x8 over 2m6s)  kubelet          Node ha-254035-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m3s (x8 over 2m6s)  kubelet          Node ha-254035-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m3s (x8 over 2m6s)  kubelet          Node ha-254035-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                  node-controller  Node ha-254035-m04 event: Registered Node ha-254035-m04 in Controller
	
	
	Name:               ha-254035-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-254035-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=ha-254035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T19_34_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:34:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-254035-m05
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 19:35:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 19:35:27 +0000   Fri, 17 Oct 2025 19:34:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 19:35:27 +0000   Fri, 17 Oct 2025 19:34:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 19:35:27 +0000   Fri, 17 Oct 2025 19:34:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 19:35:27 +0000   Fri, 17 Oct 2025 19:35:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-254035-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                0d42d24d-7b77-4e0b-8b88-c22eb0bbccca
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-254035-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         49s
	  kube-system                 kindnet-6wxsk                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      52s
	  kube-system                 kube-apiserver-ha-254035-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-controller-manager-ha-254035-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-proxy-dschq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-scheduler-ha-254035-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-vip-ha-254035-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        47s   kube-proxy       
	  Normal  RegisteredNode  52s   node-controller  Node ha-254035-m05 event: Registered Node ha-254035-m05 in Controller
	  Normal  RegisteredNode  49s   node-controller  Node ha-254035-m05 event: Registered Node ha-254035-m05 in Controller
	  Normal  RegisteredNode  48s   node-controller  Node ha-254035-m05 event: Registered Node ha-254035-m05 in Controller
	  Normal  RegisteredNode  47s   node-controller  Node ha-254035-m05 event: Registered Node ha-254035-m05 in Controller
	
	
	==> dmesg <==
	[Oct17 18:34] overlayfs: idmapped layers are currently not supported
	[Oct17 18:35] overlayfs: idmapped layers are currently not supported
	[Oct17 18:36] overlayfs: idmapped layers are currently not supported
	[ +20.850590] overlayfs: idmapped layers are currently not supported
	[Oct17 18:38] overlayfs: idmapped layers are currently not supported
	[ +19.812679] overlayfs: idmapped layers are currently not supported
	[Oct17 18:39] overlayfs: idmapped layers are currently not supported
	[ +19.225178] overlayfs: idmapped layers are currently not supported
	[Oct17 18:40] overlayfs: idmapped layers are currently not supported
	[Oct17 18:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 18:57] overlayfs: idmapped layers are currently not supported
	[Oct17 19:03] overlayfs: idmapped layers are currently not supported
	[Oct17 19:04] overlayfs: idmapped layers are currently not supported
	[Oct17 19:17] overlayfs: idmapped layers are currently not supported
	[Oct17 19:18] overlayfs: idmapped layers are currently not supported
	[Oct17 19:19] overlayfs: idmapped layers are currently not supported
	[Oct17 19:21] overlayfs: idmapped layers are currently not supported
	[Oct17 19:22] overlayfs: idmapped layers are currently not supported
	[Oct17 19:23] overlayfs: idmapped layers are currently not supported
	[  +4.119232] overlayfs: idmapped layers are currently not supported
	[Oct17 19:32] overlayfs: idmapped layers are currently not supported
	[  +2.727676] overlayfs: idmapped layers are currently not supported
	[ +41.644994] overlayfs: idmapped layers are currently not supported
	[Oct17 19:33] overlayfs: idmapped layers are currently not supported
	[Oct17 19:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ab6180a80f68dcb65397cf72c97a3f14b4b536aa865a3b252a4a6ebf62d58b59] <==
	{"level":"warn","ts":"2025-10-17T19:34:31.290023Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"d7447b558ebb0f55","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:34:31.392500Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"d7447b558ebb0f55","error":"failed to write d7447b558ebb0f55 on stream Message (write tcp 192.168.49.2:2380->192.168.49.6:36664: write: connection reset by peer)"}
	{"level":"warn","ts":"2025-10-17T19:34:31.393059Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"d7447b558ebb0f55"}
	{"level":"info","ts":"2025-10-17T19:34:31.611714Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"d7447b558ebb0f55"}
	{"level":"info","ts":"2025-10-17T19:34:31.685499Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"d7447b558ebb0f55","stream-type":"stream Message"}
	{"level":"info","ts":"2025-10-17T19:34:31.685558Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"d7447b558ebb0f55"}
	{"level":"info","ts":"2025-10-17T19:34:31.785401Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"d7447b558ebb0f55"}
	{"level":"info","ts":"2025-10-17T19:34:31.913669Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"d7447b558ebb0f55","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-10-17T19:34:31.913713Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"d7447b558ebb0f55"}
	{"level":"info","ts":"2025-10-17T19:34:31.913724Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"d7447b558ebb0f55"}
	{"level":"info","ts":"2025-10-17T19:34:31.917918Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"d7447b558ebb0f55"}
	{"level":"info","ts":"2025-10-17T19:34:43.257432Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"warn","ts":"2025-10-17T19:34:44.259903Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.910342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-5brbl\" limit:1 ","response":"range_response_count:1 size:3431"}
	{"level":"info","ts":"2025-10-17T19:34:44.260014Z","caller":"traceutil/trace.go:172","msg":"trace[1324092327] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-5brbl; range_end:; response_count:1; response_revision:3530; }","duration":"113.029945ms","start":"2025-10-17T19:34:44.146970Z","end":"2025-10-17T19:34:44.260000Z","steps":["trace[1324092327] 'agreement among raft nodes before linearized reading'  (duration: 112.817177ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:34:44.260233Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.277353ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-17T19:34:44.260445Z","caller":"traceutil/trace.go:172","msg":"trace[1645675066] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:3530; }","duration":"113.488053ms","start":"2025-10-17T19:34:44.146945Z","end":"2025-10-17T19:34:44.260433Z","steps":["trace[1645675066] 'agreement among raft nodes before linearized reading'  (duration: 113.25154ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:34:44.260706Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.783182ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-6wxsk\" limit:1 ","response":"range_response_count:1 size:3694"}
	{"level":"info","ts":"2025-10-17T19:34:44.273502Z","caller":"traceutil/trace.go:172","msg":"trace[528875741] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-6wxsk; range_end:; response_count:1; response_revision:3530; }","duration":"126.569497ms","start":"2025-10-17T19:34:44.146909Z","end":"2025-10-17T19:34:44.273479Z","steps":["trace[528875741] 'agreement among raft nodes before linearized reading'  (duration: 113.720915ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:34:44.263030Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.203225ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-h77vc\" limit:1 ","response":"range_response_count:1 size:4099"}
	{"level":"info","ts":"2025-10-17T19:34:44.273809Z","caller":"traceutil/trace.go:172","msg":"trace[249640023] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-h77vc; range_end:; response_count:1; response_revision:3530; }","duration":"126.986599ms","start":"2025-10-17T19:34:44.146813Z","end":"2025-10-17T19:34:44.273800Z","steps":["trace[249640023] 'agreement among raft nodes before linearized reading'  (duration: 114.610821ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-17T19:34:44.263260Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.647232ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-gl87l\" limit:1 ","response":"range_response_count:1 size:3694"}
	{"level":"info","ts":"2025-10-17T19:34:44.276382Z","caller":"traceutil/trace.go:172","msg":"trace[1762693465] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-gl87l; range_end:; response_count:1; response_revision:3530; }","duration":"129.479699ms","start":"2025-10-17T19:34:44.146889Z","end":"2025-10-17T19:34:44.276369Z","steps":["trace[1762693465] 'agreement among raft nodes before linearized reading'  (duration: 114.22256ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-17T19:34:44.609239Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-10-17T19:34:47.700295Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-10-17T19:35:01.023513Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"d7447b558ebb0f55","bytes":6735756,"size":"6.7 MB","took":"31.368053313s"}
	
	
	==> kernel <==
	 19:35:36 up  2:18,  0 user,  load average: 3.01, 2.69, 1.87
	Linux ha-254035 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e9ece41337b80cfabb4196dc2d55dc644a949f49cd22450cf623b7f5257d5d69] <==
	I1017 19:35:12.208062       1 main.go:324] Node ha-254035-m05 has CIDR [10.244.4.0/24] 
	I1017 19:35:12.208475       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:35:12.208588       1 main.go:301] handling current node
	I1017 19:35:12.208629       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:35:12.208671       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:35:22.215537       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:35:22.215640       1 main.go:301] handling current node
	I1017 19:35:22.215678       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:35:22.215707       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:35:22.215912       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:35:22.215951       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:35:22.216044       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 19:35:22.216059       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	I1017 19:35:22.216115       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1017 19:35:22.216128       1 main.go:324] Node ha-254035-m05 has CIDR [10.244.4.0/24] 
	I1017 19:35:32.209726       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1017 19:35:32.209764       1 main.go:324] Node ha-254035-m05 has CIDR [10.244.4.0/24] 
	I1017 19:35:32.209940       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:35:32.209959       1 main.go:301] handling current node
	I1017 19:35:32.209972       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 19:35:32.209976       1 main.go:324] Node ha-254035-m02 has CIDR [10.244.1.0/24] 
	I1017 19:35:32.210055       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 19:35:32.210068       1 main.go:324] Node ha-254035-m03 has CIDR [10.244.2.0/24] 
	I1017 19:35:32.210173       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 19:35:32.210185       1 main.go:324] Node ha-254035-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [fb534fcdb2d895a4c9c908d2c41c5a3a49e1ba7a9a8c54cca3e0f68236d86194] <==
	{"level":"warn","ts":"2025-10-17T19:32:45.556106Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001deba40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-17T19:32:45.556124Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40028872c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	I1017 19:32:45.742745       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 19:32:45.761612       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 19:32:45.766614       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 19:32:45.766727       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 19:32:45.766874       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 19:32:45.766889       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 19:32:45.772156       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 19:32:45.782338       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 19:32:45.782660       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 19:32:45.782735       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 19:32:45.786264       1 cache.go:39] Caches are synced for autoregister controller
	I1017 19:32:45.801116       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 19:32:45.801154       1 policy_source.go:240] refreshing policies
	I1017 19:32:45.801215       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 19:32:45.801340       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 19:32:45.823912       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1017 19:32:45.892067       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 19:32:46.104708       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 19:32:51.664034       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 19:32:51.782010       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 19:32:51.908184       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 19:32:52.058599       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 19:32:52.107924       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [cd9dec0514b2422e9e0e06a464213e0f38cdfce11c6ca20c97c479d028fcac71] <==
	I1017 19:32:51.704899       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 19:32:51.705461       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 19:32:51.705774       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 19:32:51.705860       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 19:32:51.707308       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 19:32:51.708143       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:32:51.708196       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 19:32:51.713230       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 19:32:51.722295       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:32:51.793811       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-254035-m04"
	I1017 19:32:51.793885       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-254035"
	I1017 19:32:51.793911       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-254035-m02"
	I1017 19:32:51.793948       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-254035-m03"
	I1017 19:32:51.794411       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	I1017 19:32:56.794689       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 19:33:32.102831       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-m4bp9 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-m4bp9\": the object has been modified; please apply your changes to the latest version and try again"
	I1017 19:33:32.116286       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"9bc45666-7349-43f1-b1bc-8fe50797293b", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-m4bp9 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-m4bp9": the object has been modified; please apply your changes to the latest version and try again
	I1017 19:33:42.572582       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-254035-m04"
	E1017 19:34:43.072957       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-xwhmv failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-xwhmv\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1017 19:34:43.102626       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-xwhmv failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-xwhmv\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1017 19:34:43.810556       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-254035-m05\" does not exist"
	I1017 19:34:43.811708       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-254035-m04"
	I1017 19:34:43.843170       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-254035-m05" podCIDRs=["10.244.4.0/24"]
	I1017 19:34:46.847409       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-254035-m05"
	I1017 19:35:28.007060       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-254035-m04"
	
	
	==> kube-controller-manager [d713edbb381bb7ac4baa67d925ebd85ec5ab61fa9319db2f03ba47d667e26940] <==
	I1017 19:32:15.577934       1 serving.go:386] Generated self-signed cert in-memory
	I1017 19:32:17.585378       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1017 19:32:17.585478       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:32:17.587388       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1017 19:32:17.588088       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1017 19:32:17.588254       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 19:32:17.588373       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1017 19:32:32.131519       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [d51ad27d42179adee09ff705d12ad5d15a734809e4732ad3eb1c4429dc7021e6] <==
	I1017 19:32:57.743934       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:32:57.902619       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:32:57.934204       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:32:57.934232       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 19:32:57.934302       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:32:58.002595       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:32:58.002661       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:32:58.008742       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:32:58.009306       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:32:58.009381       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:32:58.011974       1 config.go:200] "Starting service config controller"
	I1017 19:32:58.011999       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:32:58.021529       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:32:58.021612       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:32:58.021667       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:32:58.021695       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:32:58.021970       1 config.go:309] "Starting node config controller"
	I1017 19:32:58.021993       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:32:58.112358       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:32:58.122792       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 19:32:58.122780       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:32:58.122830       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [c4609fc3fd1c0d5440395e0986380eb9eb076a0e1e1faa4ad132e67cd913032d] <==
	E1017 19:34:44.161707       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-zh56p\": pod kube-proxy-zh56p is already assigned to node \"ha-254035-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-zh56p"
	I1017 19:34:44.164605       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-zh56p" node="ha-254035-m05"
	E1017 19:34:44.195631       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dschq\": pod kube-proxy-dschq is already assigned to node \"ha-254035-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dschq" node="ha-254035-m05"
	E1017 19:34:44.200978       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod d8f101a4-5151-4c21-8b54-e5bb2097eda0(kube-system/kube-proxy-dschq) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-dschq"
	E1017 19:34:44.201073       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dschq\": pod kube-proxy-dschq is already assigned to node \"ha-254035-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-dschq"
	I1017 19:34:44.202488       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dschq" node="ha-254035-m05"
	E1017 19:34:44.277021       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5brbl\": pod kube-proxy-5brbl is already assigned to node \"ha-254035-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5brbl" node="ha-254035-m05"
	E1017 19:34:44.278335       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 0ce3c1ba-82f7-47c9-863a-b2da2399dcaa(kube-system/kube-proxy-5brbl) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-5brbl"
	E1017 19:34:44.278456       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5brbl\": pod kube-proxy-5brbl is already assigned to node \"ha-254035-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-5brbl"
	I1017 19:34:44.288718       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5brbl" node="ha-254035-m05"
	E1017 19:34:44.292265       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-6wxsk\": pod kindnet-6wxsk is already assigned to node \"ha-254035-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-6wxsk" node="ha-254035-m05"
	E1017 19:34:44.292461       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod d246ba27-9741-4566-ad25-03513a959e1f(kube-system/kindnet-6wxsk) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-6wxsk"
	E1017 19:34:44.293474       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-6wxsk\": pod kindnet-6wxsk is already assigned to node \"ha-254035-m05\"" logger="UnhandledError" pod="kube-system/kindnet-6wxsk"
	E1017 19:34:44.292386       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-h77vc\": pod kindnet-h77vc is already assigned to node \"ha-254035-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-h77vc" node="ha-254035-m05"
	E1017 19:34:44.294925       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod daef16ff-3a08-48e6-bab5-f2be670e34d1(kube-system/kindnet-h77vc) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-h77vc"
	E1017 19:34:44.295786       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-h77vc\": pod kindnet-h77vc is already assigned to node \"ha-254035-m05\"" logger="UnhandledError" pod="kube-system/kindnet-h77vc"
	E1017 19:34:44.292415       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gl87l\": pod kindnet-gl87l is already assigned to node \"ha-254035-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-gl87l" node="ha-254035-m05"
	E1017 19:34:44.295845       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 125dbd8c-395b-479d-9509-5f1253f028f6(kube-system/kindnet-gl87l) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-gl87l"
	I1017 19:34:44.294690       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-6wxsk" node="ha-254035-m05"
	E1017 19:34:44.311928       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gl87l\": pod kindnet-gl87l is already assigned to node \"ha-254035-m05\"" logger="UnhandledError" pod="kube-system/kindnet-gl87l"
	I1017 19:34:44.312035       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gl87l" node="ha-254035-m05"
	I1017 19:34:44.312481       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-h77vc" node="ha-254035-m05"
	E1017 19:34:45.117913       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ztwbl\": pod kube-proxy-ztwbl is already assigned to node \"ha-254035-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ztwbl" node="ha-254035-m05"
	E1017 19:34:45.118002       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ztwbl\": pod kube-proxy-ztwbl is already assigned to node \"ha-254035-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-ztwbl"
	E1017 19:34:45.265630       1 pod_status_patch.go:111] "Failed to patch pod status" err="pods \"kube-proxy-ztwbl\" not found" pod="kube-system/kube-proxy-ztwbl"
	
	
	==> kubelet <==
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.424411     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-gzzsg_kube-system(9d09bb8e-ddb5-4533-9215-83fefb05a7eb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.424463     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-gzzsg" podUID="9d09bb8e-ddb5-4533-9215-83fefb05a7eb"
	Oct 17 19:32:46 ha-254035 kubelet[802]: W1017 19:32:46.425112     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/crio-ebb6a1f53c4835f98f170cb0cc9a8c381e017f19896c6a29b18d262526414238 WatchSource:0}: Error finding container ebb6a1f53c4835f98f170cb0cc9a8c381e017f19896c6a29b18d262526414238: Status 404 returned error can't find the container with id ebb6a1f53c4835f98f170cb0cc9a8c381e017f19896c6a29b18d262526414238
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.428343     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(4784cc20-6df7-4e32-bbfa-e0b3be4a1e83): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.428384     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="4784cc20-6df7-4e32-bbfa-e0b3be4a1e83"
	Oct 17 19:32:46 ha-254035 kubelet[802]: W1017 19:32:46.433597     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/crio-507d7b819debe5b3cd335ff315e790595f8a73c05cf49258f5a95ad85018e8b6 WatchSource:0}: Error finding container 507d7b819debe5b3cd335ff315e790595f8a73c05cf49258f5a95ad85018e8b6: Status 404 returned error can't find the container with id 507d7b819debe5b3cd335ff315e790595f8a73c05cf49258f5a95ad85018e8b6
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.441352     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-wbgc8_kube-system(8e82e918-326c-4295-82ea-e35a31f64287): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.441397     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-wbgc8" podUID="8e82e918-326c-4295-82ea-e35a31f64287"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.442165     802 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-254035\" already exists" pod="kube-system/kube-scheduler-ha-254035"
	Oct 17 19:32:46 ha-254035 kubelet[802]: W1017 19:32:46.458234     802 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/crio-0240e4c18c32a113147b1316d44dc028805e98a9876780111398a33d445c8673 WatchSource:0}: Error finding container 0240e4c18c32a113147b1316d44dc028805e98a9876780111398a33d445c8673: Status 404 returned error can't find the container with id 0240e4c18c32a113147b1316d44dc028805e98a9876780111398a33d445c8673
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.468716     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-nc6x2_default(4ced2553-3c5f-4d67-ad3c-2ed34ab319ef): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.468759     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-nc6x2" podUID="4ced2553-3c5f-4d67-ad3c-2ed34ab319ef"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.722833     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-nc6x2_default(4ced2553-3c5f-4d67-ad3c-2ed34ab319ef): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.741101     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-nc6x2" podUID="4ced2553-3c5f-4d67-ad3c-2ed34ab319ef"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.749534     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-gfklr_kube-system(8bf2b43b-91c9-4531-a571-36060412860e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755626     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-gfklr" podUID="8bf2b43b-91c9-4531-a571-36060412860e"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755218     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(4784cc20-6df7-4e32-bbfa-e0b3be4a1e83): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755307     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-gzzsg_kube-system(9d09bb8e-ddb5-4533-9215-83fefb05a7eb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755390     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-proxy start failed in pod kube-proxy-548b2_kube-system(4b772887-90df-4871-9343-69349bdda859): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.755118     802 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-wbgc8_kube-system(8e82e918-326c-4295-82ea-e35a31f64287): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.757120     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-wbgc8" podUID="8e82e918-326c-4295-82ea-e35a31f64287"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.757234     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-gzzsg" podUID="9d09bb8e-ddb5-4533-9215-83fefb05a7eb"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.757252     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="4784cc20-6df7-4e32-bbfa-e0b3be4a1e83"
	Oct 17 19:32:46 ha-254035 kubelet[802]: E1017 19:32:46.757271     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-548b2" podUID="4b772887-90df-4871-9343-69349bdda859"
	Oct 17 19:33:28 ha-254035 kubelet[802]: I1017 19:33:28.066788     802 scope.go:117] "RemoveContainer" containerID="0cc2287088bc871e7f4dd5ef5a425a95862343c93ae9b170eadd77d685735b39"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-254035 -n ha-254035
helpers_test.go:269: (dbg) Run:  kubectl --context ha-254035 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.44s)
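The repeated "Operation cannot be fulfilled ... the object has been modified; please apply your changes to the latest version and try again" lines in the kube-controller-manager output above are ordinary optimistic-concurrency conflicts: a write was attempted with a stale resourceVersion, and the usual remedy is to re-read the object and retry. The sketch below shows that standard pattern with client-go's RetryOnConflict helper; it is illustrative only (the fake clientset, namespace, and label key are placeholders, not values from this run), not code from minikube or these controllers.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
	"k8s.io/client-go/util/retry"
)

// labelService re-reads the Service on every attempt so the update carries the
// latest resourceVersion; writing with a stale version is what produces the
// "object has been modified" conflict seen in the logs.
func labelService(c kubernetes.Interface, namespace, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		svc, err := c.CoreV1().Services(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if svc.Labels == nil {
			svc.Labels = map[string]string{}
		}
		svc.Labels["example.com/touched"] = "true" // placeholder change
		_, err = c.CoreV1().Services(namespace).Update(context.TODO(), svc, metav1.UpdateOptions{})
		return err
	})
}

func main() {
	c := fake.NewSimpleClientset(&corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Namespace: "kube-system", Name: "kube-dns"},
	})
	fmt.Println("update error:", labelService(c, "kube-system", "kube-dns"))
}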

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.3s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-999484 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-999484 --output=json --user=testUser: exit status 80 (2.300709567s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"eab396fd-450e-4e50-875a-36a14a717155","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-999484 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"63fbdfdf-d469-49b9-8456-98154993a315","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-17T19:37:13Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"b7aefd04-f177-411b-a5b0-7c0a86f03bfa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-999484 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.30s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (2.04s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-999484 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-999484 --output=json --user=testUser: exit status 80 (2.043994678s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0b950a03-e66e-4f8f-9967-c1f11c28b635","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-999484 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"88d62e7c-3c55-4909-b782-d83b7c7f9529","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-17T19:37:15Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"6c1410a0-2602-4b70-9616-2c46f1e43d9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-999484 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.04s)
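Both JSONOutput failures above share one cause: the pause and unpause paths shell out to `sudo runc list -f json` on the node, and runc's default state root /run/runc does not exist there, so the command exits 1 and minikube exits 80 with GUEST_PAUSE/GUEST_UNPAUSE. A minimal, hypothetical probe for that symptom is sketched below; it is not part of the test suite, and /run/crun is included only as a guessed alternative root for nodes where cri-o is configured with a different OCI runtime.

package main

import (
	"fmt"
	"os"
)

func main() {
	// runc reads container state from /run/runc by default; the error above says
	// that directory is missing on this cri-o node. /run/crun is a guessed
	// alternative, not something confirmed by the logs.
	for _, root := range []string{"/run/runc", "/run/crun"} {
		if fi, err := os.Stat(root); err == nil && fi.IsDir() {
			fmt.Printf("state root present: %s\n", root)
		} else {
			fmt.Printf("state root missing: %s (%v)\n", root, err)
		}
	}
}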

                                                
                                    
x
+
TestPause/serial/Pause (6.45s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-217784 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-217784 --alsologtostderr -v=5: exit status 80 (1.770710983s)

                                                
                                                
-- stdout --
	* Pausing node pause-217784 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:00:41.925880  437651 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:00:41.926675  437651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:41.926695  437651 out.go:374] Setting ErrFile to fd 2...
	I1017 20:00:41.926701  437651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:41.926997  437651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:00:41.927285  437651 out.go:368] Setting JSON to false
	I1017 20:00:41.927313  437651 mustload.go:65] Loading cluster: pause-217784
	I1017 20:00:41.927762  437651 config.go:182] Loaded profile config "pause-217784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:41.928232  437651 cli_runner.go:164] Run: docker container inspect pause-217784 --format={{.State.Status}}
	I1017 20:00:41.945637  437651 host.go:66] Checking if "pause-217784" exists ...
	I1017 20:00:41.945964  437651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:00:42.019994  437651 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-17 20:00:42.007206229 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:00:42.020865  437651 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-217784 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 20:00:42.023952  437651 out.go:179] * Pausing node pause-217784 ... 
	I1017 20:00:42.027825  437651 host.go:66] Checking if "pause-217784" exists ...
	I1017 20:00:42.028191  437651 ssh_runner.go:195] Run: systemctl --version
	I1017 20:00:42.028236  437651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-217784
	I1017 20:00:42.050444  437651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33384 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/pause-217784/id_rsa Username:docker}
	I1017 20:00:42.173672  437651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:00:42.192011  437651 pause.go:52] kubelet running: true
	I1017 20:00:42.192089  437651 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:00:42.467949  437651 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:00:42.468034  437651 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:00:42.539401  437651 cri.go:89] found id: "0e7006964e34fff23229d107c1ced6a1ba86c3e37a57059a480d06d19cea3006"
	I1017 20:00:42.539425  437651 cri.go:89] found id: "62588c2119e6d0606bb21463fe47ed8567945aaf80b48732876e92bd3aac6d3c"
	I1017 20:00:42.539430  437651 cri.go:89] found id: "f9b2cb8da0165e5e84d72b243c5e3fd7d4e8e1dc2acf5e407090f93c881f74d2"
	I1017 20:00:42.539434  437651 cri.go:89] found id: "96b630dc738baaa3ae91f61e89650eaff48265721a8893be95ca1c3b57d64c6e"
	I1017 20:00:42.539437  437651 cri.go:89] found id: "ac7ec3d90033a6dde87d8b2bc23b9d6e5c887a94e0db8a34e9e454c1ad12f17a"
	I1017 20:00:42.539441  437651 cri.go:89] found id: "3fd4475a37d18c00aec1ef703d573e6e5fb6655507ad68d1fca8ae80ede45d04"
	I1017 20:00:42.539444  437651 cri.go:89] found id: "674ea3bea7ff943a48ca4af34bde9cc6f0e26dd205525997435f1c2327b22556"
	I1017 20:00:42.539447  437651 cri.go:89] found id: "35101c6831df164efd0fe6402576f945fa6c3b23f28742ea5838dbd41250deb3"
	I1017 20:00:42.539450  437651 cri.go:89] found id: "f65fdf97b4d906f7856f0df6988ecb8924864dd7377a0f64601e508eb40b7458"
	I1017 20:00:42.539458  437651 cri.go:89] found id: "d3af5d8cf3e85823f42bfa25e4df0cbc4644772954310529fa40dc6570250b0c"
	I1017 20:00:42.539461  437651 cri.go:89] found id: "a2dfb5e26ac71f5212fffeb91e67e0e371348b88a23fa9cba8152e7f4ac1cc12"
	I1017 20:00:42.539464  437651 cri.go:89] found id: "e00ec461553354a63089e70d55be3852e68c0e75fb8407e6ddbd77706f937bb5"
	I1017 20:00:42.539468  437651 cri.go:89] found id: "b5d8399275d880bb3281f1eef3884a684e6c9909d2b4a7142a465337ebb920e3"
	I1017 20:00:42.539471  437651 cri.go:89] found id: "8142d317a44bbe309ab0386847561af9cc42546e023be675b04898c245530117"
	I1017 20:00:42.539475  437651 cri.go:89] found id: ""
	I1017 20:00:42.539523  437651 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:00:42.550767  437651 retry.go:31] will retry after 329.169004ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:00:42Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:00:42.880273  437651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:00:42.892854  437651 pause.go:52] kubelet running: false
	I1017 20:00:42.892914  437651 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:00:43.039951  437651 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:00:43.040031  437651 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:00:43.125950  437651 cri.go:89] found id: "0e7006964e34fff23229d107c1ced6a1ba86c3e37a57059a480d06d19cea3006"
	I1017 20:00:43.125976  437651 cri.go:89] found id: "62588c2119e6d0606bb21463fe47ed8567945aaf80b48732876e92bd3aac6d3c"
	I1017 20:00:43.125982  437651 cri.go:89] found id: "f9b2cb8da0165e5e84d72b243c5e3fd7d4e8e1dc2acf5e407090f93c881f74d2"
	I1017 20:00:43.125986  437651 cri.go:89] found id: "96b630dc738baaa3ae91f61e89650eaff48265721a8893be95ca1c3b57d64c6e"
	I1017 20:00:43.125990  437651 cri.go:89] found id: "ac7ec3d90033a6dde87d8b2bc23b9d6e5c887a94e0db8a34e9e454c1ad12f17a"
	I1017 20:00:43.125999  437651 cri.go:89] found id: "3fd4475a37d18c00aec1ef703d573e6e5fb6655507ad68d1fca8ae80ede45d04"
	I1017 20:00:43.126002  437651 cri.go:89] found id: "674ea3bea7ff943a48ca4af34bde9cc6f0e26dd205525997435f1c2327b22556"
	I1017 20:00:43.126005  437651 cri.go:89] found id: "35101c6831df164efd0fe6402576f945fa6c3b23f28742ea5838dbd41250deb3"
	I1017 20:00:43.126008  437651 cri.go:89] found id: "f65fdf97b4d906f7856f0df6988ecb8924864dd7377a0f64601e508eb40b7458"
	I1017 20:00:43.126013  437651 cri.go:89] found id: "d3af5d8cf3e85823f42bfa25e4df0cbc4644772954310529fa40dc6570250b0c"
	I1017 20:00:43.126017  437651 cri.go:89] found id: "a2dfb5e26ac71f5212fffeb91e67e0e371348b88a23fa9cba8152e7f4ac1cc12"
	I1017 20:00:43.126020  437651 cri.go:89] found id: "e00ec461553354a63089e70d55be3852e68c0e75fb8407e6ddbd77706f937bb5"
	I1017 20:00:43.126023  437651 cri.go:89] found id: "b5d8399275d880bb3281f1eef3884a684e6c9909d2b4a7142a465337ebb920e3"
	I1017 20:00:43.126026  437651 cri.go:89] found id: "8142d317a44bbe309ab0386847561af9cc42546e023be675b04898c245530117"
	I1017 20:00:43.126029  437651 cri.go:89] found id: ""
	I1017 20:00:43.126086  437651 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:00:43.137859  437651 retry.go:31] will retry after 234.960679ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:00:43Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:00:43.373353  437651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:00:43.385954  437651 pause.go:52] kubelet running: false
	I1017 20:00:43.386038  437651 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:00:43.539657  437651 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:00:43.539758  437651 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:00:43.604576  437651 cri.go:89] found id: "0e7006964e34fff23229d107c1ced6a1ba86c3e37a57059a480d06d19cea3006"
	I1017 20:00:43.604649  437651 cri.go:89] found id: "62588c2119e6d0606bb21463fe47ed8567945aaf80b48732876e92bd3aac6d3c"
	I1017 20:00:43.604669  437651 cri.go:89] found id: "f9b2cb8da0165e5e84d72b243c5e3fd7d4e8e1dc2acf5e407090f93c881f74d2"
	I1017 20:00:43.604694  437651 cri.go:89] found id: "96b630dc738baaa3ae91f61e89650eaff48265721a8893be95ca1c3b57d64c6e"
	I1017 20:00:43.604723  437651 cri.go:89] found id: "ac7ec3d90033a6dde87d8b2bc23b9d6e5c887a94e0db8a34e9e454c1ad12f17a"
	I1017 20:00:43.604748  437651 cri.go:89] found id: "3fd4475a37d18c00aec1ef703d573e6e5fb6655507ad68d1fca8ae80ede45d04"
	I1017 20:00:43.604768  437651 cri.go:89] found id: "674ea3bea7ff943a48ca4af34bde9cc6f0e26dd205525997435f1c2327b22556"
	I1017 20:00:43.604788  437651 cri.go:89] found id: "35101c6831df164efd0fe6402576f945fa6c3b23f28742ea5838dbd41250deb3"
	I1017 20:00:43.604812  437651 cri.go:89] found id: "f65fdf97b4d906f7856f0df6988ecb8924864dd7377a0f64601e508eb40b7458"
	I1017 20:00:43.604848  437651 cri.go:89] found id: "d3af5d8cf3e85823f42bfa25e4df0cbc4644772954310529fa40dc6570250b0c"
	I1017 20:00:43.604885  437651 cri.go:89] found id: "a2dfb5e26ac71f5212fffeb91e67e0e371348b88a23fa9cba8152e7f4ac1cc12"
	I1017 20:00:43.604905  437651 cri.go:89] found id: "e00ec461553354a63089e70d55be3852e68c0e75fb8407e6ddbd77706f937bb5"
	I1017 20:00:43.604925  437651 cri.go:89] found id: "b5d8399275d880bb3281f1eef3884a684e6c9909d2b4a7142a465337ebb920e3"
	I1017 20:00:43.604946  437651 cri.go:89] found id: "8142d317a44bbe309ab0386847561af9cc42546e023be675b04898c245530117"
	I1017 20:00:43.604972  437651 cri.go:89] found id: ""
	I1017 20:00:43.605036  437651 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:00:43.619456  437651 out.go:203] 
	W1017 20:00:43.622320  437651 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:00:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:00:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:00:43.622340  437651 out.go:285] * 
	* 
	W1017 20:00:43.629064  437651 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:00:43.632268  437651 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-217784 --alsologtostderr -v=5" : exit status 80
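The --alsologtostderr trace above shows the shape of this failure: pause disables the kubelet, enumerates CRI containers with crictl, then runs `sudo runc list -f json`, retrying twice with short backoffs before giving up and exiting 80 with GUEST_PAUSE. The sketch below mirrors that retry loop for illustration only; it is not minikube's implementation, and the backoff values are simply taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunning mirrors the command seen in the trace: `sudo runc list -f json`.
func listRunning() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	// Backoffs roughly match the "will retry after ..." intervals logged above.
	backoffs := []time.Duration{329 * time.Millisecond, 235 * time.Millisecond}
	for attempt := 0; ; attempt++ {
		out, err := listRunning()
		if err == nil {
			fmt.Printf("running containers: %s\n", out)
			return
		}
		if attempt >= len(backoffs) {
			// minikube surfaces this terminal error as GUEST_PAUSE (exit status 80).
			fmt.Printf("giving up: %v\n", err)
			return
		}
		fmt.Printf("will retry after %v: %v\n", backoffs[attempt], err)
		time.Sleep(backoffs[attempt])
	}
}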
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-217784
helpers_test.go:243: (dbg) docker inspect pause-217784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ab010eed84dcf554a449938dc51096864915d30b6c8fe732d7efad8f59793653",
	        "Created": "2025-10-17T19:57:47.297819224Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 428896,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:57:47.373128715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/ab010eed84dcf554a449938dc51096864915d30b6c8fe732d7efad8f59793653/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ab010eed84dcf554a449938dc51096864915d30b6c8fe732d7efad8f59793653/hostname",
	        "HostsPath": "/var/lib/docker/containers/ab010eed84dcf554a449938dc51096864915d30b6c8fe732d7efad8f59793653/hosts",
	        "LogPath": "/var/lib/docker/containers/ab010eed84dcf554a449938dc51096864915d30b6c8fe732d7efad8f59793653/ab010eed84dcf554a449938dc51096864915d30b6c8fe732d7efad8f59793653-json.log",
	        "Name": "/pause-217784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-217784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-217784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ab010eed84dcf554a449938dc51096864915d30b6c8fe732d7efad8f59793653",
	                "LowerDir": "/var/lib/docker/overlay2/a72cb925ebcd3ece39dae78f951907d69cb82d05155d243ef98d68b95e77f716-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a72cb925ebcd3ece39dae78f951907d69cb82d05155d243ef98d68b95e77f716/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a72cb925ebcd3ece39dae78f951907d69cb82d05155d243ef98d68b95e77f716/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a72cb925ebcd3ece39dae78f951907d69cb82d05155d243ef98d68b95e77f716/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-217784",
	                "Source": "/var/lib/docker/volumes/pause-217784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-217784",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-217784",
	                "name.minikube.sigs.k8s.io": "pause-217784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "248746af21cdd29bb6e8f897f18b9cf6f18c72db05e809a6c275b1eaa13f3461",
	            "SandboxKey": "/var/run/docker/netns/248746af21cd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33384"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33385"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33388"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33386"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33387"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-217784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:42:30:8f:2d:42",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2a339a9ea8f3c55549b9d606422f7421496e172da373a40f46136b43005fd030",
	                    "EndpointID": "64a6d668ce0df35fc385d7ae1b02a527ccfc2f8dd97b56074b046a03bea7c883",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-217784",
	                        "ab010eed84dc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-217784 -n pause-217784
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-217784 -n pause-217784: exit status 2 (342.574219ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-217784 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-217784 logs -n 25: (1.615271547s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-731142 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-731142       │ jenkins │ v1.37.0 │ 17 Oct 25 19:54 UTC │ 17 Oct 25 19:54 UTC │
	│ delete  │ -p NoKubernetes-731142                                                                                                                   │ NoKubernetes-731142       │ jenkins │ v1.37.0 │ 17 Oct 25 19:54 UTC │ 17 Oct 25 19:54 UTC │
	│ start   │ -p NoKubernetes-731142 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-731142       │ jenkins │ v1.37.0 │ 17 Oct 25 19:54 UTC │ 17 Oct 25 19:54 UTC │
	│ ssh     │ -p NoKubernetes-731142 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-731142       │ jenkins │ v1.37.0 │ 17 Oct 25 19:54 UTC │                     │
	│ start   │ -p missing-upgrade-672083 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-672083    │ jenkins │ v1.37.0 │ 17 Oct 25 19:54 UTC │ 17 Oct 25 19:55 UTC │
	│ stop    │ -p NoKubernetes-731142                                                                                                                   │ NoKubernetes-731142       │ jenkins │ v1.37.0 │ 17 Oct 25 19:54 UTC │ 17 Oct 25 19:54 UTC │
	│ start   │ -p NoKubernetes-731142 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-731142       │ jenkins │ v1.37.0 │ 17 Oct 25 19:54 UTC │ 17 Oct 25 19:55 UTC │
	│ ssh     │ -p NoKubernetes-731142 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-731142       │ jenkins │ v1.37.0 │ 17 Oct 25 19:55 UTC │                     │
	│ delete  │ -p NoKubernetes-731142                                                                                                                   │ NoKubernetes-731142       │ jenkins │ v1.37.0 │ 17 Oct 25 19:55 UTC │ 17 Oct 25 19:55 UTC │
	│ start   │ -p kubernetes-upgrade-819667 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-819667 │ jenkins │ v1.37.0 │ 17 Oct 25 19:55 UTC │ 17 Oct 25 19:55 UTC │
	│ delete  │ -p missing-upgrade-672083                                                                                                                │ missing-upgrade-672083    │ jenkins │ v1.37.0 │ 17 Oct 25 19:55 UTC │ 17 Oct 25 19:55 UTC │
	│ start   │ -p stopped-upgrade-771448 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-771448    │ jenkins │ v1.32.0 │ 17 Oct 25 19:55 UTC │ 17 Oct 25 19:56 UTC │
	│ stop    │ -p kubernetes-upgrade-819667                                                                                                             │ kubernetes-upgrade-819667 │ jenkins │ v1.37.0 │ 17 Oct 25 19:55 UTC │ 17 Oct 25 19:55 UTC │
	│ start   │ -p kubernetes-upgrade-819667 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-819667 │ jenkins │ v1.37.0 │ 17 Oct 25 19:55 UTC │ 17 Oct 25 20:00 UTC │
	│ stop    │ stopped-upgrade-771448 stop                                                                                                              │ stopped-upgrade-771448    │ jenkins │ v1.32.0 │ 17 Oct 25 19:56 UTC │ 17 Oct 25 19:56 UTC │
	│ start   │ -p stopped-upgrade-771448 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-771448    │ jenkins │ v1.37.0 │ 17 Oct 25 19:56 UTC │ 17 Oct 25 19:56 UTC │
	│ delete  │ -p stopped-upgrade-771448                                                                                                                │ stopped-upgrade-771448    │ jenkins │ v1.37.0 │ 17 Oct 25 19:56 UTC │ 17 Oct 25 19:56 UTC │
	│ start   │ -p running-upgrade-866281 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-866281    │ jenkins │ v1.32.0 │ 17 Oct 25 19:56 UTC │ 17 Oct 25 19:57 UTC │
	│ start   │ -p running-upgrade-866281 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-866281    │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 19:57 UTC │
	│ delete  │ -p running-upgrade-866281                                                                                                                │ running-upgrade-866281    │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 19:57 UTC │
	│ start   │ -p pause-217784 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-217784              │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 19:59 UTC │
	│ start   │ -p pause-217784 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-217784              │ jenkins │ v1.37.0 │ 17 Oct 25 19:59 UTC │ 17 Oct 25 20:00 UTC │
	│ start   │ -p kubernetes-upgrade-819667 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-819667 │ jenkins │ v1.37.0 │ 17 Oct 25 20:00 UTC │                     │
	│ start   │ -p kubernetes-upgrade-819667 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-819667 │ jenkins │ v1.37.0 │ 17 Oct 25 20:00 UTC │                     │
	│ pause   │ -p pause-217784 --alsologtostderr -v=5                                                                                                   │ pause-217784              │ jenkins │ v1.37.0 │ 17 Oct 25 20:00 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:00:29
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:00:29.334367  436648 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:00:29.334523  436648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:29.334536  436648 out.go:374] Setting ErrFile to fd 2...
	I1017 20:00:29.334569  436648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:29.334869  436648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:00:29.336199  436648 out.go:368] Setting JSON to false
	I1017 20:00:29.337356  436648 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9780,"bootTime":1760721449,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 20:00:29.337431  436648 start.go:141] virtualization:  
	I1017 20:00:29.340642  436648 out.go:179] * [kubernetes-upgrade-819667] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:00:29.343646  436648 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:00:29.343715  436648 notify.go:220] Checking for updates...
	I1017 20:00:29.349671  436648 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:00:29.352669  436648 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:00:29.356729  436648 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 20:00:29.359663  436648 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:00:29.362969  436648 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:00:29.367033  436648 config.go:182] Loaded profile config "kubernetes-upgrade-819667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:29.367571  436648 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:00:29.403953  436648 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:00:29.404078  436648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:00:29.464965  436648 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-17 20:00:29.454898344 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:00:29.465084  436648 docker.go:318] overlay module found
	I1017 20:00:29.468268  436648 out.go:179] * Using the docker driver based on existing profile
	I1017 20:00:29.471185  436648 start.go:305] selected driver: docker
	I1017 20:00:29.471256  436648 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-819667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-819667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:00:29.471375  436648 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:00:29.472071  436648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:00:29.541139  436648 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-17 20:00:29.52559884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:00:29.541471  436648 cni.go:84] Creating CNI manager for ""
	I1017 20:00:29.541535  436648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:00:29.541571  436648 start.go:349] cluster config:
	{Name:kubernetes-upgrade-819667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-819667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:00:29.544697  436648 out.go:179] * Starting "kubernetes-upgrade-819667" primary control-plane node in "kubernetes-upgrade-819667" cluster
	I1017 20:00:29.547461  436648 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:00:29.550466  436648 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:00:29.553385  436648 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:00:29.553442  436648 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:00:29.553458  436648 cache.go:58] Caching tarball of preloaded images
	I1017 20:00:29.553480  436648 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:00:29.553542  436648 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:00:29.553554  436648 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:00:29.553675  436648 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/config.json ...
	I1017 20:00:29.574729  436648 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:00:29.574752  436648 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:00:29.574766  436648 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:00:29.574788  436648 start.go:360] acquireMachinesLock for kubernetes-upgrade-819667: {Name:mk36f903b6b98ce7786cdaf804e9cbb9cfeef883 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:00:29.574848  436648 start.go:364] duration metric: took 37.505µs to acquireMachinesLock for "kubernetes-upgrade-819667"
	I1017 20:00:29.574871  436648 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:00:29.574880  436648 fix.go:54] fixHost starting: 
	I1017 20:00:29.575141  436648 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-819667 --format={{.State.Status}}
	I1017 20:00:29.593138  436648 fix.go:112] recreateIfNeeded on kubernetes-upgrade-819667: state=Running err=<nil>
	W1017 20:00:29.593171  436648 fix.go:138] unexpected machine state, will restart: <nil>
	W1017 20:00:27.751366  433027 pod_ready.go:104] pod "kube-controller-manager-pause-217784" is not "Ready", error: <nil>
	W1017 20:00:30.250507  433027 pod_ready.go:104] pod "kube-controller-manager-pause-217784" is not "Ready", error: <nil>
	I1017 20:00:29.596294  436648 out.go:252] * Updating the running docker "kubernetes-upgrade-819667" container ...
	I1017 20:00:29.596334  436648 machine.go:93] provisionDockerMachine start ...
	I1017 20:00:29.596434  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:29.615216  436648 main.go:141] libmachine: Using SSH client type: native
	I1017 20:00:29.615556  436648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33369 <nil> <nil>}
	I1017 20:00:29.615571  436648 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:00:29.768268  436648 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-819667
	
	I1017 20:00:29.768334  436648 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-819667"
	I1017 20:00:29.768416  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:29.789262  436648 main.go:141] libmachine: Using SSH client type: native
	I1017 20:00:29.789576  436648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33369 <nil> <nil>}
	I1017 20:00:29.789594  436648 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-819667 && echo "kubernetes-upgrade-819667" | sudo tee /etc/hostname
	I1017 20:00:29.951345  436648 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-819667
	
	I1017 20:00:29.951422  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:29.980744  436648 main.go:141] libmachine: Using SSH client type: native
	I1017 20:00:29.981053  436648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33369 <nil> <nil>}
	I1017 20:00:29.981075  436648 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-819667' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-819667/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-819667' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:00:30.181166  436648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:00:30.181193  436648 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 20:00:30.181235  436648 ubuntu.go:190] setting up certificates
	I1017 20:00:30.181250  436648 provision.go:84] configureAuth start
	I1017 20:00:30.181337  436648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-819667
	I1017 20:00:30.200215  436648 provision.go:143] copyHostCerts
	I1017 20:00:30.200304  436648 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 20:00:30.200336  436648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 20:00:30.200428  436648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 20:00:30.200626  436648 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 20:00:30.200643  436648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 20:00:30.200681  436648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 20:00:30.200750  436648 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 20:00:30.200761  436648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 20:00:30.200787  436648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 20:00:30.200846  436648 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-819667 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-819667 localhost minikube]
	I1017 20:00:30.588512  436648 provision.go:177] copyRemoteCerts
	I1017 20:00:30.588598  436648 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:00:30.588649  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:30.609615  436648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/kubernetes-upgrade-819667/id_rsa Username:docker}
	I1017 20:00:30.713445  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:00:30.732979  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1017 20:00:30.757254  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:00:30.779670  436648 provision.go:87] duration metric: took 598.405452ms to configureAuth
	I1017 20:00:30.779706  436648 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:00:30.779950  436648 config.go:182] Loaded profile config "kubernetes-upgrade-819667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:30.780085  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:30.797895  436648 main.go:141] libmachine: Using SSH client type: native
	I1017 20:00:30.798206  436648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33369 <nil> <nil>}
	I1017 20:00:30.798232  436648 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:00:31.481590  436648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:00:31.481615  436648 machine.go:96] duration metric: took 1.885273345s to provisionDockerMachine
	I1017 20:00:31.481626  436648 start.go:293] postStartSetup for "kubernetes-upgrade-819667" (driver="docker")
	I1017 20:00:31.481653  436648 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:00:31.481753  436648 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:00:31.481813  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:31.499721  436648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/kubernetes-upgrade-819667/id_rsa Username:docker}
	I1017 20:00:31.604396  436648 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:00:31.607903  436648 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:00:31.607930  436648 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:00:31.607941  436648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 20:00:31.607994  436648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 20:00:31.608096  436648 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 20:00:31.608194  436648 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:00:31.615659  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:00:31.633870  436648 start.go:296] duration metric: took 152.228146ms for postStartSetup
	I1017 20:00:31.633970  436648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:00:31.634061  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:31.652299  436648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/kubernetes-upgrade-819667/id_rsa Username:docker}
	I1017 20:00:31.761402  436648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:00:31.772410  436648 fix.go:56] duration metric: took 2.19752229s for fixHost
	I1017 20:00:31.772433  436648 start.go:83] releasing machines lock for "kubernetes-upgrade-819667", held for 2.197572782s
	I1017 20:00:31.772515  436648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-819667
	I1017 20:00:31.791444  436648 ssh_runner.go:195] Run: cat /version.json
	I1017 20:00:31.791497  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:31.791731  436648 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:00:31.791830  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:31.826498  436648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/kubernetes-upgrade-819667/id_rsa Username:docker}
	I1017 20:00:31.835654  436648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/kubernetes-upgrade-819667/id_rsa Username:docker}
	I1017 20:00:32.054584  436648 ssh_runner.go:195] Run: systemctl --version
	I1017 20:00:32.061562  436648 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:00:32.133453  436648 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:00:32.139474  436648 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:00:32.139562  436648 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:00:32.149285  436648 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:00:32.149329  436648 start.go:495] detecting cgroup driver to use...
	I1017 20:00:32.149362  436648 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:00:32.149430  436648 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:00:32.166030  436648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:00:32.181224  436648 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:00:32.181297  436648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:00:32.201578  436648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:00:32.234375  436648 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:00:32.537342  436648 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:00:32.845843  436648 docker.go:234] disabling docker service ...
	I1017 20:00:32.845917  436648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:00:32.908192  436648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:00:32.930638  436648 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:00:33.308320  436648 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:00:33.674035  436648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:00:33.726734  436648 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:00:33.778304  436648 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:00:33.778403  436648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:00:33.817314  436648 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:00:33.817397  436648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:00:33.857159  436648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:00:33.879735  436648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:00:33.912260  436648 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:00:33.932003  436648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:00:33.954564  436648 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:00:33.981357  436648 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:00:34.013306  436648 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:00:34.032713  436648 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:00:34.063066  436648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:00:34.460233  436648 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:00:34.759556  436648 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:00:34.759636  436648 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:00:34.763580  436648 start.go:563] Will wait 60s for crictl version
	I1017 20:00:34.763658  436648 ssh_runner.go:195] Run: which crictl
	I1017 20:00:34.767821  436648 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:00:34.802510  436648 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:00:34.802623  436648 ssh_runner.go:195] Run: crio --version
	I1017 20:00:34.839141  436648 ssh_runner.go:195] Run: crio --version
	I1017 20:00:34.873744  436648 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1017 20:00:32.255026  433027 pod_ready.go:104] pod "kube-controller-manager-pause-217784" is not "Ready", error: <nil>
	W1017 20:00:34.260392  433027 pod_ready.go:104] pod "kube-controller-manager-pause-217784" is not "Ready", error: <nil>
	I1017 20:00:34.876821  436648 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-819667 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:00:34.894571  436648 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 20:00:34.904039  436648 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-819667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-819667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:00:34.904142  436648 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:00:34.904192  436648 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:00:34.951234  436648 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:00:34.951257  436648 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:00:34.951328  436648 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:00:35.008315  436648 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:00:35.008404  436648 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:00:35.008429  436648 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1017 20:00:35.008655  436648 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-819667 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-819667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:00:35.008801  436648 ssh_runner.go:195] Run: crio config
	I1017 20:00:35.087473  436648 cni.go:84] Creating CNI manager for ""
	I1017 20:00:35.087549  436648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:00:35.087606  436648 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:00:35.087665  436648 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-819667 NodeName:kubernetes-upgrade-819667 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:00:35.087878  436648 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-819667"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:00:35.088000  436648 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:00:35.098597  436648 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:00:35.098670  436648 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:00:35.111167  436648 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1017 20:00:35.129365  436648 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:00:35.144872  436648 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1017 20:00:35.163656  436648 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:00:35.168263  436648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:00:35.404120  436648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:00:35.431432  436648 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667 for IP: 192.168.76.2
	I1017 20:00:35.431451  436648 certs.go:195] generating shared ca certs ...
	I1017 20:00:35.431467  436648 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:00:35.431599  436648 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 20:00:35.431653  436648 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 20:00:35.431670  436648 certs.go:257] generating profile certs ...
	I1017 20:00:35.431764  436648 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/client.key
	I1017 20:00:35.431820  436648 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/apiserver.key.65ed7d0b
	I1017 20:00:35.431863  436648 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/proxy-client.key
	I1017 20:00:35.431973  436648 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 20:00:35.432011  436648 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 20:00:35.432024  436648 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:00:35.432050  436648 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:00:35.432077  436648 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:00:35.432103  436648 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 20:00:35.432148  436648 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:00:35.432836  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:00:35.462263  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:00:35.483011  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:00:35.513533  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 20:00:35.535066  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1017 20:00:35.555109  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:00:35.574339  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:00:35.593822  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:00:35.613354  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 20:00:35.633081  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 20:00:35.651418  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:00:35.673204  436648 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:00:35.690442  436648 ssh_runner.go:195] Run: openssl version
	I1017 20:00:35.697797  436648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:00:35.706957  436648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:00:35.713108  436648 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:00:35.713174  436648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:00:35.757442  436648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:00:35.765628  436648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 20:00:35.774316  436648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 20:00:35.777988  436648 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 20:00:35.778099  436648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 20:00:35.818886  436648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 20:00:35.826914  436648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 20:00:35.835052  436648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 20:00:35.839620  436648 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 20:00:35.839697  436648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 20:00:35.880815  436648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:00:35.889008  436648 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:00:35.892884  436648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:00:35.938894  436648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:00:35.980667  436648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:00:36.023967  436648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:00:36.066645  436648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:00:36.108513  436648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
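
	[Editor's note] The cert steps above hash each CA with `openssl x509 -hash -noout -in <pem>` and then `ln -fs` the PEM to `<hash>.0` (e.g. b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients locate trusted CAs in /etc/ssl/certs. A minimal Go sketch of that pattern, shelling out to openssl the same way the log does; the helper name and paths are illustrative, not minikube's actual code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the OpenSSL subject-name hash of a CA certificate and
	// creates the <hash>.0 symlink in certsDir, mirroring the `ln -fs` in the log.
	func linkCACert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // force-replace an existing link, like `ln -fs`
		return os.Symlink(certPath, link)
	}

	func main() {
		// Hypothetical invocation for illustration only.
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
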
	I1017 20:00:36.152397  436648 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-819667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-819667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:00:36.152489  436648 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:00:36.152598  436648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:00:36.184259  436648 cri.go:89] found id: "dc254c9104ea046e16568206f8b0d4df2bfc3770d9d2523464d2a40ef2e2a621"
	I1017 20:00:36.184287  436648 cri.go:89] found id: "70efaa70a9ae2e51b25cebc8a4343491bc98c86c779ebdec15652b01e51591e5"
	I1017 20:00:36.184292  436648 cri.go:89] found id: "4dea6c2decbc8746b522b740738bc882cf2b475a98a3b7772145843eeee4dcdc"
	I1017 20:00:36.184296  436648 cri.go:89] found id: "dddc349eafb6f3c3decae0a4fe1a77955eaf4766dc92b5de514139343894a4e1"
	I1017 20:00:36.184300  436648 cri.go:89] found id: ""
	I1017 20:00:36.184352  436648 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 20:00:36.196124  436648 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:00:36Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:00:36.196245  436648 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:00:36.205925  436648 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:00:36.205999  436648 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:00:36.206090  436648 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:00:36.215450  436648 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:00:36.216205  436648 kubeconfig.go:125] found "kubernetes-upgrade-819667" server: "https://192.168.76.2:8443"
	I1017 20:00:36.217110  436648 kapi.go:59] client config for kubernetes-upgrade-819667: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 20:00:36.217605  436648 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1017 20:00:36.217625  436648 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1017 20:00:36.217631  436648 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1017 20:00:36.217636  436648 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1017 20:00:36.217642  436648 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1017 20:00:36.217913  436648 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:00:36.225877  436648 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1017 20:00:36.225914  436648 kubeadm.go:601] duration metric: took 19.896195ms to restartPrimaryControlPlane
	I1017 20:00:36.225924  436648 kubeadm.go:402] duration metric: took 73.537472ms to StartCluster
	I1017 20:00:36.225961  436648 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:00:36.226042  436648 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:00:36.226977  436648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:00:36.227264  436648 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:00:36.227585  436648 config.go:182] Loaded profile config "kubernetes-upgrade-819667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:36.227733  436648 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:00:36.227806  436648 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-819667"
	I1017 20:00:36.227827  436648 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-819667"
	W1017 20:00:36.227837  436648 addons.go:247] addon storage-provisioner should already be in state true
	I1017 20:00:36.227858  436648 host.go:66] Checking if "kubernetes-upgrade-819667" exists ...
	I1017 20:00:36.228680  436648 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-819667 --format={{.State.Status}}
	I1017 20:00:36.228861  436648 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-819667"
	I1017 20:00:36.228907  436648 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-819667"
	I1017 20:00:36.229201  436648 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-819667 --format={{.State.Status}}
	I1017 20:00:36.232985  436648 out.go:179] * Verifying Kubernetes components...
	I1017 20:00:36.236355  436648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:00:36.257036  436648 kapi.go:59] client config for kubernetes-upgrade-819667: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 20:00:36.259649  436648 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-819667"
	W1017 20:00:36.259671  436648 addons.go:247] addon default-storageclass should already be in state true
	I1017 20:00:36.259710  436648 host.go:66] Checking if "kubernetes-upgrade-819667" exists ...
	I1017 20:00:36.260169  436648 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-819667 --format={{.State.Status}}
	I1017 20:00:36.275926  436648 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:00:36.278901  436648 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:00:36.278926  436648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:00:36.279000  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:36.304247  436648 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:00:36.304278  436648 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:00:36.304338  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:36.325702  436648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/kubernetes-upgrade-819667/id_rsa Username:docker}
	I1017 20:00:36.344646  436648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/kubernetes-upgrade-819667/id_rsa Username:docker}
	I1017 20:00:36.483401  436648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:00:36.500325  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:00:36.511090  436648 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:00:36.511234  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:00:36.519643  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1017 20:00:36.623932  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:36.623976  436648 retry.go:31] will retry after 319.762769ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1017 20:00:36.625790  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:36.625816  436648 retry.go:31] will retry after 290.496875ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:36.917307  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:00:36.944802  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:00:37.012275  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1017 20:00:37.032415  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:37.032452  436648 retry.go:31] will retry after 249.987699ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1017 20:00:37.067202  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:37.067277  436648 retry.go:31] will retry after 228.257388ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:37.283514  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:00:37.295907  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1017 20:00:37.367703  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:37.367784  436648 retry.go:31] will retry after 744.067547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1017 20:00:37.372298  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:37.372331  436648 retry.go:31] will retry after 400.825746ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:37.511450  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:00:37.774148  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1017 20:00:37.837688  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:37.837717  436648 retry.go:31] will retry after 600.454251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:38.012021  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:00:38.112793  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1017 20:00:38.174932  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:38.174989  436648 retry.go:31] will retry after 913.935237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:38.439335  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1017 20:00:38.501158  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:38.501201  436648 retry.go:31] will retry after 1.41529173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:38.512304  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:00:39.011403  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:00:39.089464  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1017 20:00:39.148772  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:39.148806  436648 retry.go:31] will retry after 1.525469269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
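
	[Editor's note] The repeated "apply failed, will retry after ..." entries above are a retry loop with a growing, jittered delay while the apiserver on localhost:8443 is still refusing connections. A minimal Go sketch of that pattern under stated assumptions (the base delay, jitter, and attempt cap are illustrative, not minikube's exact retry.go values):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// applyWithRetry runs apply() until it succeeds or the attempt budget is spent,
	// sleeping a little longer (with jitter) after each failure.
	func applyWithRetry(apply func() error, attempts int) error {
		backoff := 300 * time.Millisecond
		var lastErr error
		for i := 0; i < attempts; i++ {
			if lastErr = apply(); lastErr == nil {
				return nil
			}
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("apply failed, will retry after %v: %v\n", sleep, lastErr)
			time.Sleep(sleep)
			backoff *= 2
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
	}

	func main() {
		calls := 0
		err := applyWithRetry(func() error {
			calls++
			if calls < 3 {
				return errors.New("connection refused") // apiserver not ready yet
			}
			return nil
		}, 5)
		fmt.Println("result:", err)
	}
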
	W1017 20:00:36.750640  433027 pod_ready.go:104] pod "kube-controller-manager-pause-217784" is not "Ready", error: <nil>
	W1017 20:00:39.249700  433027 pod_ready.go:104] pod "kube-controller-manager-pause-217784" is not "Ready", error: <nil>
	W1017 20:00:41.250156  433027 pod_ready.go:104] pod "kube-controller-manager-pause-217784" is not "Ready", error: <nil>
	I1017 20:00:41.750661  433027 pod_ready.go:94] pod "kube-controller-manager-pause-217784" is "Ready"
	I1017 20:00:41.750693  433027 pod_ready.go:86] duration metric: took 42.505884337s for pod "kube-controller-manager-pause-217784" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:41.752984  433027 pod_ready.go:83] waiting for pod "kube-proxy-zt258" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:41.757741  433027 pod_ready.go:94] pod "kube-proxy-zt258" is "Ready"
	I1017 20:00:41.757764  433027 pod_ready.go:86] duration metric: took 4.756545ms for pod "kube-proxy-zt258" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:41.759846  433027 pod_ready.go:83] waiting for pod "kube-scheduler-pause-217784" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:41.764381  433027 pod_ready.go:94] pod "kube-scheduler-pause-217784" is "Ready"
	I1017 20:00:41.764407  433027 pod_ready.go:86] duration metric: took 4.53566ms for pod "kube-scheduler-pause-217784" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:41.764421  433027 pod_ready.go:40] duration metric: took 51.043627718s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:00:41.826919  433027 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 20:00:41.830113  433027 out.go:179] * Done! kubectl is now configured to use "pause-217784" cluster and "default" namespace by default
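
	[Editor's note] The pod_ready lines above come from polling each control-plane pod's Ready condition until it turns True (e.g. 42.5s for kube-controller-manager-pause-217784). A hedged client-go sketch of that check; the kubeconfig path, poll interval, and timeout are assumptions, not the test harness's actual pod_ready.go helper:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady polls a kube-system pod until its Ready condition is True
	// or the context is cancelled.
	func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(2 * time.Second): // illustrative poll interval
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		fmt.Println(waitForPodReady(ctx, cs, "kube-controller-manager-pause-217784"))
	}
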
	
	
	==> CRI-O <==
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.211841352Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.211874426Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.215156517Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.215192717Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.21521423Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.218496149Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.218527919Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.21854935Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.221637683Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.221668648Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.221692959Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.225386034Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.225417492Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.225439046Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.228559049Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.22859045Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:00:31 pause-217784 crio[2197]: time="2025-10-17T20:00:31.749789214Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=de418f21-b11f-4d79-bd1c-e821f2fb8951 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:00:31 pause-217784 crio[2197]: time="2025-10-17T20:00:31.751923341Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=f219eb93-ca08-4bb6-af9f-e494bf86dc5d name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:00:31 pause-217784 crio[2197]: time="2025-10-17T20:00:31.753013781Z" level=info msg="Creating container: kube-system/kube-controller-manager-pause-217784/kube-controller-manager" id=8c6b5b19-71d5-4960-b8f6-61e51d7dda5c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:00:31 pause-217784 crio[2197]: time="2025-10-17T20:00:31.753237447Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:00:31 pause-217784 crio[2197]: time="2025-10-17T20:00:31.765893325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:00:31 pause-217784 crio[2197]: time="2025-10-17T20:00:31.766691088Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:00:31 pause-217784 crio[2197]: time="2025-10-17T20:00:31.801990925Z" level=info msg="Created container 0e7006964e34fff23229d107c1ced6a1ba86c3e37a57059a480d06d19cea3006: kube-system/kube-controller-manager-pause-217784/kube-controller-manager" id=8c6b5b19-71d5-4960-b8f6-61e51d7dda5c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:00:31 pause-217784 crio[2197]: time="2025-10-17T20:00:31.807234898Z" level=info msg="Starting container: 0e7006964e34fff23229d107c1ced6a1ba86c3e37a57059a480d06d19cea3006" id=49042e59-1f34-48b3-977c-c53e51ac78e0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:00:31 pause-217784 crio[2197]: time="2025-10-17T20:00:31.818590464Z" level=info msg="Started container" PID=2784 containerID=0e7006964e34fff23229d107c1ced6a1ba86c3e37a57059a480d06d19cea3006 description=kube-system/kube-controller-manager-pause-217784/kube-controller-manager id=49042e59-1f34-48b3-977c-c53e51ac78e0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=99ceeba29f6c0cfb23dfe4cc17d9c05a9df753ff75ea6c8007be87ad9cdb5105
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	0e7006964e34f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   12 seconds ago       Running             kube-controller-manager   3                   99ceeba29f6c0       kube-controller-manager-pause-217784   kube-system
	62588c2119e6d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   59 seconds ago       Running             etcd                      2                   ac418e8ede010       etcd-pause-217784                      kube-system
	f9b2cb8da0165       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   About a minute ago   Running             coredns                   2                   2f371a5cc2c3a       coredns-66bc5c9577-g5z7h               kube-system
	96b630dc738ba       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Running             kube-scheduler            2                   cd9ffa2516d01       kube-scheduler-pause-217784            kube-system
	ac7ec3d90033a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Running             kube-apiserver            2                   e1321335f7701       kube-apiserver-pause-217784            kube-system
	3fd4475a37d18       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Running             kube-proxy                2                   a1aba6eb6b84d       kube-proxy-zt258                       kube-system
	674ea3bea7ff9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Running             kindnet-cni               2                   dbcebc531e2d1       kindnet-46jpk                          kube-system
	35101c6831df1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   2                   99ceeba29f6c0       kube-controller-manager-pause-217784   kube-system
	f65fdf97b4d90       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            1                   cd9ffa2516d01       kube-scheduler-pause-217784            kube-system
	d3af5d8cf3e85       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            1                   e1321335f7701       kube-apiserver-pause-217784            kube-system
	a2dfb5e26ac71       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               1                   dbcebc531e2d1       kindnet-46jpk                          kube-system
	e00ec46155335       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      1                   ac418e8ede010       etcd-pause-217784                      kube-system
	b5d8399275d88       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   About a minute ago   Exited              coredns                   1                   2f371a5cc2c3a       coredns-66bc5c9577-g5z7h               kube-system
	8142d317a44bb       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                1                   a1aba6eb6b84d       kube-proxy-zt258                       kube-system
	
	
	==> coredns [b5d8399275d880bb3281f1eef3884a684e6c9909d2b4a7142a465337ebb920e3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:51058 - 48861 "HINFO IN 8856667112206479484.8285220164548607494. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02605722s
	
	
	==> coredns [f9b2cb8da0165e5e84d72b243c5e3fd7d4e8e1dc2acf5e407090f93c881f74d2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39749 - 8425 "HINFO IN 867509571052015759.8496768120052480553. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.031894302s
	
	
	==> describe nodes <==
	Name:               pause-217784
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-217784
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=pause-217784
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_58_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:58:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-217784
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:00:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:00:36 +0000   Fri, 17 Oct 2025 19:58:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:00:36 +0000   Fri, 17 Oct 2025 19:58:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:00:36 +0000   Fri, 17 Oct 2025 19:58:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:00:36 +0000   Fri, 17 Oct 2025 19:58:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-217784
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                2fa317a1-f859-4a8f-a0f8-dd31253c3cc3
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-g5z7h                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m26s
	  kube-system                 etcd-pause-217784                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m32s
	  kube-system                 kindnet-46jpk                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m27s
	  kube-system                 kube-apiserver-pause-217784             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-controller-manager-pause-217784    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-proxy-zt258                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-scheduler-pause-217784             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m26s  kube-proxy       
	  Normal   Starting                 55s    kube-proxy       
	  Normal   Starting                 2m32s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m32s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m32s  kubelet          Node pause-217784 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m32s  kubelet          Node pause-217784 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m32s  kubelet          Node pause-217784 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m28s  node-controller  Node pause-217784 event: Registered Node pause-217784 in Controller
	  Normal   NodeReady                106s   kubelet          Node pause-217784 status is now: NodeReady
	  Warning  ContainerGCFailed        92s    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           8s     node-controller  Node pause-217784 event: Registered Node pause-217784 in Controller
	
	
	==> dmesg <==
	[  +4.119232] overlayfs: idmapped layers are currently not supported
	[Oct17 19:32] overlayfs: idmapped layers are currently not supported
	[  +2.727676] overlayfs: idmapped layers are currently not supported
	[ +41.644994] overlayfs: idmapped layers are currently not supported
	[Oct17 19:33] overlayfs: idmapped layers are currently not supported
	[Oct17 19:34] overlayfs: idmapped layers are currently not supported
	[Oct17 19:36] overlayfs: idmapped layers are currently not supported
	[Oct17 19:41] overlayfs: idmapped layers are currently not supported
	[ +34.896999] overlayfs: idmapped layers are currently not supported
	[Oct17 19:42] overlayfs: idmapped layers are currently not supported
	[Oct17 19:43] overlayfs: idmapped layers are currently not supported
	[Oct17 19:45] overlayfs: idmapped layers are currently not supported
	[Oct17 19:46] overlayfs: idmapped layers are currently not supported
	[ +18.070710] overlayfs: idmapped layers are currently not supported
	[Oct17 19:47] overlayfs: idmapped layers are currently not supported
	[ +43.697346] overlayfs: idmapped layers are currently not supported
	[Oct17 19:48] overlayfs: idmapped layers are currently not supported
	[Oct17 19:49] overlayfs: idmapped layers are currently not supported
	[ +26.194162] overlayfs: idmapped layers are currently not supported
	[Oct17 19:50] overlayfs: idmapped layers are currently not supported
	[Oct17 19:52] overlayfs: idmapped layers are currently not supported
	[Oct17 19:54] overlayfs: idmapped layers are currently not supported
	[Oct17 19:55] overlayfs: idmapped layers are currently not supported
	[Oct17 19:56] overlayfs: idmapped layers are currently not supported
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [62588c2119e6d0606bb21463fe47ed8567945aaf80b48732876e92bd3aac6d3c] <==
	{"level":"warn","ts":"2025-10-17T19:59:47.845871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:47.867500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:47.887496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:47.913897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:47.933760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:47.949394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:47.962251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:47.981084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:47.996303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.025270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.064032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.084982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.102189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.119576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.138730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.169741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.187588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.201513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.240705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.281155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.302794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.337101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.354893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.379327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.454559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49446","server-name":"","error":"EOF"}
	
	
	==> etcd [e00ec461553354a63089e70d55be3852e68c0e75fb8407e6ddbd77706f937bb5] <==
	{"level":"info","ts":"2025-10-17T19:59:09.744762Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"warn","ts":"2025-10-17T19:59:09.762341Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-10-17T19:59:09.763277Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-17T19:59:09.764634Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-17T19:59:09.764673Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-17T19:59:09.764988Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-17T19:59:09.794012Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-17T19:59:10.119706Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-17T19:59:10.119770Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-217784","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-17T19:59:10.119897Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-17T19:59:10.123237Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-17T19:59:10.123311Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T19:59:10.123331Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-17T19:59:10.123413Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-17T19:59:10.123433Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-17T19:59:10.123633Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-17T19:59:10.123651Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-17T19:59:10.123662Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-17T19:59:10.123585Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-17T19:59:10.123699Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-17T19:59:10.123705Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T19:59:10.133295Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-17T19:59:10.133396Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T19:59:10.133446Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-17T19:59:10.133453Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-217784","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 20:00:44 up  2:43,  0 user,  load average: 2.40, 2.64, 2.31
	Linux pause-217784 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [674ea3bea7ff943a48ca4af34bde9cc6f0e26dd205525997435f1c2327b22556] <==
	E1017 19:59:35.208947       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 19:59:35.209139       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 19:59:35.209302       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 19:59:35.209447       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 19:59:36.100375       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 19:59:36.525901       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 19:59:36.544734       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 19:59:36.789268       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 19:59:38.407582       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 19:59:38.533430       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 19:59:39.257883       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 19:59:39.584469       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1017 19:59:49.308361       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:59:49.308465       1 metrics.go:72] Registering metrics
	I1017 19:59:49.308943       1 controller.go:711] "Syncing nftables rules"
	I1017 19:59:55.207596       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:59:55.207658       1 main.go:301] handling current node
	I1017 20:00:05.207297       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:00:05.207334       1 main.go:301] handling current node
	I1017 20:00:15.212572       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:00:15.212684       1 main.go:301] handling current node
	I1017 20:00:25.211580       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:00:25.211612       1 main.go:301] handling current node
	I1017 20:00:35.207658       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:00:35.208040       1 main.go:301] handling current node
	
	
	==> kindnet [a2dfb5e26ac71f5212fffeb91e67e0e371348b88a23fa9cba8152e7f4ac1cc12] <==
	I1017 19:59:09.318912       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:59:09.319165       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 19:59:09.319298       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:59:09.319309       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:59:09.319319       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:59:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:59:09.601225       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:59:09.601317       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:59:09.601351       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:59:09.601708       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [ac7ec3d90033a6dde87d8b2bc23b9d6e5c887a94e0db8a34e9e454c1ad12f17a] <==
	I1017 19:59:49.096205       1 shared_informer.go:349] "Waiting for caches to sync" controller="kubernetes-service-cidr-controller"
	I1017 19:59:49.255255       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 19:59:49.261760       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 19:59:49.261854       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 19:59:49.268351       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 19:59:49.268572       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1017 19:59:49.268666       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 19:59:49.269670       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 19:59:49.269755       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 19:59:49.269801       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 19:59:49.286652       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 19:59:49.286856       1 policy_source.go:240] refreshing policies
	I1017 19:59:49.296257       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 19:59:49.296306       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 19:59:49.301225       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 19:59:49.301969       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1017 19:59:49.302609       1 aggregator.go:171] initial CRD sync complete...
	I1017 19:59:49.302673       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 19:59:49.302703       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 19:59:49.302733       1 cache.go:39] Caches are synced for autoregister controller
	I1017 19:59:49.316493       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1017 19:59:49.319187       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 19:59:49.334864       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:59:49.970994       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:00:35.577465       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	
	
	==> kube-apiserver [d3af5d8cf3e85823f42bfa25e4df0cbc4644772954310529fa40dc6570250b0c] <==
	I1017 19:59:09.469351       1 options.go:263] external host was not specified, using 192.168.85.2
	I1017 19:59:09.477925       1 server.go:150] Version: v1.34.1
	I1017 19:59:09.478040       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [0e7006964e34fff23229d107c1ced6a1ba86c3e37a57059a480d06d19cea3006] <==
	I1017 20:00:36.931854       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 20:00:36.934208       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:00:36.934278       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:00:36.934311       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:00:36.937567       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:00:36.939049       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 20:00:36.943615       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:00:36.947647       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 20:00:36.950054       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 20:00:36.951722       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1017 20:00:36.959239       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 20:00:36.959360       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 20:00:36.959442       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 20:00:36.959542       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 20:00:36.959758       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 20:00:36.959849       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 20:00:36.959989       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 20:00:36.960010       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 20:00:36.960895       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 20:00:36.960393       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 20:00:36.961219       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-217784"
	I1017 20:00:36.961308       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 20:00:36.965682       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 20:00:36.965822       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:00:36.967995       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	
	
	==> kube-controller-manager [35101c6831df164efd0fe6402576f945fa6c3b23f28742ea5838dbd41250deb3] <==
	I1017 19:59:31.944793       1 serving.go:386] Generated self-signed cert in-memory
	I1017 19:59:32.571990       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1017 19:59:32.572022       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:59:32.573529       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1017 19:59:32.573714       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1017 19:59:32.573929       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1017 19:59:32.573979       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 19:59:49.201600       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [3fd4475a37d18c00aec1ef703d573e6e5fb6655507ad68d1fca8ae80ede45d04] <==
	I1017 19:59:35.821758       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:59:35.910035       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1017 19:59:35.910889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-217784&limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:59:36.913130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-217784&limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:59:39.231732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-217784&limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1017 19:59:49.318361       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:59:49.318456       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 19:59:49.318605       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:59:49.347600       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:59:49.347651       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:59:49.358248       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:59:49.358604       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:59:49.358806       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:59:49.360495       1 config.go:200] "Starting service config controller"
	I1017 19:59:49.360575       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:59:49.360628       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:59:49.360683       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:59:49.360726       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:59:49.360763       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:59:49.361393       1 config.go:309] "Starting node config controller"
	I1017 19:59:49.361991       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:59:49.362069       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:59:49.462411       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:59:49.467470       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:59:49.476599       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [8142d317a44bbe309ab0386847561af9cc42546e023be675b04898c245530117] <==
	I1017 19:59:09.376608       1 server_linux.go:53] "Using iptables proxy"
	
	
	==> kube-scheduler [96b630dc738baaa3ae91f61e89650eaff48265721a8893be95ca1c3b57d64c6e] <==
	I1017 19:59:43.289805       1 serving.go:386] Generated self-signed cert in-memory
	W1017 19:59:49.204640       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 19:59:49.204745       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 19:59:49.204778       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 19:59:49.204818       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 19:59:49.256876       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 19:59:49.256980       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:59:49.265760       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 19:59:49.265937       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:59:49.268613       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:59:49.268704       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 19:59:49.368970       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f65fdf97b4d906f7856f0df6988ecb8924864dd7377a0f64601e508eb40b7458] <==
	
	
	==> kubelet <==
	Oct 17 19:59:43 pause-217784 kubelet[1309]: W1017 19:59:43.008612    1309 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 17 19:59:44 pause-217784 kubelet[1309]: I1017 19:59:44.751290    1309 scope.go:117] "RemoveContainer" containerID="e00ec461553354a63089e70d55be3852e68c0e75fb8407e6ddbd77706f937bb5"
	Oct 17 19:59:49 pause-217784 kubelet[1309]: E1017 19:59:49.035019    1309 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-217784\" is forbidden: User \"system:node:pause-217784\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-217784' and this object" podUID="8e0af0c855e8d2c1fffbec063d7c38ca" pod="kube-system/kube-scheduler-pause-217784"
	Oct 17 19:59:49 pause-217784 kubelet[1309]: E1017 19:59:49.035526    1309 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-217784\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-217784' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 17 19:59:49 pause-217784 kubelet[1309]: E1017 19:59:49.079906    1309 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-217784\" is forbidden: User \"system:node:pause-217784\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-217784' and this object" podUID="adbf34715131edf4a0adf073cdfefb0d" pod="kube-system/etcd-pause-217784"
	Oct 17 19:59:49 pause-217784 kubelet[1309]: E1017 19:59:49.201970    1309 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-46jpk\" is forbidden: User \"system:node:pause-217784\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-217784' and this object" podUID="03412f46-522b-4ba3-8a9d-f1453429ea60" pod="kube-system/kindnet-46jpk"
	Oct 17 19:59:49 pause-217784 kubelet[1309]: E1017 19:59:49.234676    1309 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-zt258\" is forbidden: User \"system:node:pause-217784\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-217784' and this object" podUID="b2ec80fc-103f-4e5d-a8d8-ba147dc8c2df" pod="kube-system/kube-proxy-zt258"
	Oct 17 19:59:49 pause-217784 kubelet[1309]: E1017 19:59:49.249916    1309 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-g5z7h\" is forbidden: User \"system:node:pause-217784\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-217784' and this object" podUID="aa9d1acc-4613-468b-af2d-c1907dd42048" pod="kube-system/coredns-66bc5c9577-g5z7h"
	Oct 17 19:59:49 pause-217784 kubelet[1309]: I1017 19:59:49.280285    1309 scope.go:117] "RemoveContainer" containerID="8a2e0c0e3cf515d3df5cb05835c9998c9772491a9626eb43759688eabe46cd3d"
	Oct 17 19:59:49 pause-217784 kubelet[1309]: I1017 19:59:49.285299    1309 scope.go:117] "RemoveContainer" containerID="35101c6831df164efd0fe6402576f945fa6c3b23f28742ea5838dbd41250deb3"
	Oct 17 19:59:49 pause-217784 kubelet[1309]: E1017 19:59:49.285537    1309 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-217784_kube-system(3bb112e926c4becd104d4b9d920bd51e)\"" pod="kube-system/kube-controller-manager-pause-217784" podUID="3bb112e926c4becd104d4b9d920bd51e"
	Oct 17 19:59:52 pause-217784 kubelet[1309]: I1017 19:59:52.636111    1309 scope.go:117] "RemoveContainer" containerID="35101c6831df164efd0fe6402576f945fa6c3b23f28742ea5838dbd41250deb3"
	Oct 17 19:59:52 pause-217784 kubelet[1309]: E1017 19:59:52.636775    1309 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-217784_kube-system(3bb112e926c4becd104d4b9d920bd51e)\"" pod="kube-system/kube-controller-manager-pause-217784" podUID="3bb112e926c4becd104d4b9d920bd51e"
	Oct 17 20:00:02 pause-217784 kubelet[1309]: I1017 20:00:02.750298    1309 scope.go:117] "RemoveContainer" containerID="35101c6831df164efd0fe6402576f945fa6c3b23f28742ea5838dbd41250deb3"
	Oct 17 20:00:02 pause-217784 kubelet[1309]: E1017 20:00:02.750462    1309 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-217784_kube-system(3bb112e926c4becd104d4b9d920bd51e)\"" pod="kube-system/kube-controller-manager-pause-217784" podUID="3bb112e926c4becd104d4b9d920bd51e"
	Oct 17 20:00:12 pause-217784 kubelet[1309]: E1017 20:00:12.734582    1309 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7a74ddc9c1829da1c91cce2f0c341a07b58d5109d1055c0a5979517ae088341b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7a74ddc9c1829da1c91cce2f0c341a07b58d5109d1055c0a5979517ae088341b/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-apiserver-pause-217784_d71e9b68718e4348632412d448990b5b/kube-apiserver/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-apiserver-pause-217784_d71e9b68718e4348632412d448990b5b/kube-apiserver/0.log: no such file or directory
	Oct 17 20:00:12 pause-217784 kubelet[1309]: E1017 20:00:12.740895    1309 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a632040ca92db85595274657ce7649f0263caeb3670f0fe9def9dc496cb56aef/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a632040ca92db85595274657ce7649f0263caeb3670f0fe9def9dc496cb56aef/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_etcd-pause-217784_adbf34715131edf4a0adf073cdfefb0d/etcd/0.log" to get inode usage: stat /var/log/pods/kube-system_etcd-pause-217784_adbf34715131edf4a0adf073cdfefb0d/etcd/0.log: no such file or directory
	Oct 17 20:00:12 pause-217784 kubelet[1309]: E1017 20:00:12.767101    1309 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5a99844d65cad5e8b9dbe26d7e176134333c146fc29eb4908864bee5564bf424/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5a99844d65cad5e8b9dbe26d7e176134333c146fc29eb4908864bee5564bf424/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-scheduler-pause-217784_8e0af0c855e8d2c1fffbec063d7c38ca/kube-scheduler/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-scheduler-pause-217784_8e0af0c855e8d2c1fffbec063d7c38ca/kube-scheduler/0.log: no such file or directory
	Oct 17 20:00:12 pause-217784 kubelet[1309]: E1017 20:00:12.776547    1309 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5104298c8460fc62473f9597602ca69d7422488f68e6061340909086204f0737/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5104298c8460fc62473f9597602ca69d7422488f68e6061340909086204f0737/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-pause-217784_3bb112e926c4becd104d4b9d920bd51e/kube-controller-manager/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-pause-217784_3bb112e926c4becd104d4b9d920bd51e/kube-controller-manager/0.log: no such file or directory
	Oct 17 20:00:17 pause-217784 kubelet[1309]: I1017 20:00:17.748802    1309 scope.go:117] "RemoveContainer" containerID="35101c6831df164efd0fe6402576f945fa6c3b23f28742ea5838dbd41250deb3"
	Oct 17 20:00:17 pause-217784 kubelet[1309]: E1017 20:00:17.749706    1309 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-217784_kube-system(3bb112e926c4becd104d4b9d920bd51e)\"" pod="kube-system/kube-controller-manager-pause-217784" podUID="3bb112e926c4becd104d4b9d920bd51e"
	Oct 17 20:00:31 pause-217784 kubelet[1309]: I1017 20:00:31.748497    1309 scope.go:117] "RemoveContainer" containerID="35101c6831df164efd0fe6402576f945fa6c3b23f28742ea5838dbd41250deb3"
	Oct 17 20:00:42 pause-217784 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:00:42 pause-217784 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:00:42 pause-217784 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-217784 -n pause-217784
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-217784 -n pause-217784: exit status 2 (367.906669ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-217784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-217784
helpers_test.go:243: (dbg) docker inspect pause-217784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ab010eed84dcf554a449938dc51096864915d30b6c8fe732d7efad8f59793653",
	        "Created": "2025-10-17T19:57:47.297819224Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 428896,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:57:47.373128715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/ab010eed84dcf554a449938dc51096864915d30b6c8fe732d7efad8f59793653/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ab010eed84dcf554a449938dc51096864915d30b6c8fe732d7efad8f59793653/hostname",
	        "HostsPath": "/var/lib/docker/containers/ab010eed84dcf554a449938dc51096864915d30b6c8fe732d7efad8f59793653/hosts",
	        "LogPath": "/var/lib/docker/containers/ab010eed84dcf554a449938dc51096864915d30b6c8fe732d7efad8f59793653/ab010eed84dcf554a449938dc51096864915d30b6c8fe732d7efad8f59793653-json.log",
	        "Name": "/pause-217784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-217784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-217784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ab010eed84dcf554a449938dc51096864915d30b6c8fe732d7efad8f59793653",
	                "LowerDir": "/var/lib/docker/overlay2/a72cb925ebcd3ece39dae78f951907d69cb82d05155d243ef98d68b95e77f716-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a72cb925ebcd3ece39dae78f951907d69cb82d05155d243ef98d68b95e77f716/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a72cb925ebcd3ece39dae78f951907d69cb82d05155d243ef98d68b95e77f716/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a72cb925ebcd3ece39dae78f951907d69cb82d05155d243ef98d68b95e77f716/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-217784",
	                "Source": "/var/lib/docker/volumes/pause-217784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-217784",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-217784",
	                "name.minikube.sigs.k8s.io": "pause-217784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "248746af21cdd29bb6e8f897f18b9cf6f18c72db05e809a6c275b1eaa13f3461",
	            "SandboxKey": "/var/run/docker/netns/248746af21cd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33384"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33385"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33388"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33386"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33387"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-217784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:42:30:8f:2d:42",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2a339a9ea8f3c55549b9d606422f7421496e172da373a40f46136b43005fd030",
	                    "EndpointID": "64a6d668ce0df35fc385d7ae1b02a527ccfc2f8dd97b56074b046a03bea7c883",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-217784",
	                        "ab010eed84dc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-217784 -n pause-217784
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-217784 -n pause-217784: exit status 2 (355.573547ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-217784 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-217784 logs -n 25: (1.338947283s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-731142 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-731142       │ jenkins │ v1.37.0 │ 17 Oct 25 19:54 UTC │ 17 Oct 25 19:54 UTC │
	│ delete  │ -p NoKubernetes-731142                                                                                                                   │ NoKubernetes-731142       │ jenkins │ v1.37.0 │ 17 Oct 25 19:54 UTC │ 17 Oct 25 19:54 UTC │
	│ start   │ -p NoKubernetes-731142 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-731142       │ jenkins │ v1.37.0 │ 17 Oct 25 19:54 UTC │ 17 Oct 25 19:54 UTC │
	│ ssh     │ -p NoKubernetes-731142 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-731142       │ jenkins │ v1.37.0 │ 17 Oct 25 19:54 UTC │                     │
	│ start   │ -p missing-upgrade-672083 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-672083    │ jenkins │ v1.37.0 │ 17 Oct 25 19:54 UTC │ 17 Oct 25 19:55 UTC │
	│ stop    │ -p NoKubernetes-731142                                                                                                                   │ NoKubernetes-731142       │ jenkins │ v1.37.0 │ 17 Oct 25 19:54 UTC │ 17 Oct 25 19:54 UTC │
	│ start   │ -p NoKubernetes-731142 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-731142       │ jenkins │ v1.37.0 │ 17 Oct 25 19:54 UTC │ 17 Oct 25 19:55 UTC │
	│ ssh     │ -p NoKubernetes-731142 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-731142       │ jenkins │ v1.37.0 │ 17 Oct 25 19:55 UTC │                     │
	│ delete  │ -p NoKubernetes-731142                                                                                                                   │ NoKubernetes-731142       │ jenkins │ v1.37.0 │ 17 Oct 25 19:55 UTC │ 17 Oct 25 19:55 UTC │
	│ start   │ -p kubernetes-upgrade-819667 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-819667 │ jenkins │ v1.37.0 │ 17 Oct 25 19:55 UTC │ 17 Oct 25 19:55 UTC │
	│ delete  │ -p missing-upgrade-672083                                                                                                                │ missing-upgrade-672083    │ jenkins │ v1.37.0 │ 17 Oct 25 19:55 UTC │ 17 Oct 25 19:55 UTC │
	│ start   │ -p stopped-upgrade-771448 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-771448    │ jenkins │ v1.32.0 │ 17 Oct 25 19:55 UTC │ 17 Oct 25 19:56 UTC │
	│ stop    │ -p kubernetes-upgrade-819667                                                                                                             │ kubernetes-upgrade-819667 │ jenkins │ v1.37.0 │ 17 Oct 25 19:55 UTC │ 17 Oct 25 19:55 UTC │
	│ start   │ -p kubernetes-upgrade-819667 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-819667 │ jenkins │ v1.37.0 │ 17 Oct 25 19:55 UTC │ 17 Oct 25 20:00 UTC │
	│ stop    │ stopped-upgrade-771448 stop                                                                                                              │ stopped-upgrade-771448    │ jenkins │ v1.32.0 │ 17 Oct 25 19:56 UTC │ 17 Oct 25 19:56 UTC │
	│ start   │ -p stopped-upgrade-771448 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-771448    │ jenkins │ v1.37.0 │ 17 Oct 25 19:56 UTC │ 17 Oct 25 19:56 UTC │
	│ delete  │ -p stopped-upgrade-771448                                                                                                                │ stopped-upgrade-771448    │ jenkins │ v1.37.0 │ 17 Oct 25 19:56 UTC │ 17 Oct 25 19:56 UTC │
	│ start   │ -p running-upgrade-866281 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-866281    │ jenkins │ v1.32.0 │ 17 Oct 25 19:56 UTC │ 17 Oct 25 19:57 UTC │
	│ start   │ -p running-upgrade-866281 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-866281    │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 19:57 UTC │
	│ delete  │ -p running-upgrade-866281                                                                                                                │ running-upgrade-866281    │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 19:57 UTC │
	│ start   │ -p pause-217784 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-217784              │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 19:59 UTC │
	│ start   │ -p pause-217784 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-217784              │ jenkins │ v1.37.0 │ 17 Oct 25 19:59 UTC │ 17 Oct 25 20:00 UTC │
	│ start   │ -p kubernetes-upgrade-819667 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-819667 │ jenkins │ v1.37.0 │ 17 Oct 25 20:00 UTC │                     │
	│ start   │ -p kubernetes-upgrade-819667 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-819667 │ jenkins │ v1.37.0 │ 17 Oct 25 20:00 UTC │                     │
	│ pause   │ -p pause-217784 --alsologtostderr -v=5                                                                                                   │ pause-217784              │ jenkins │ v1.37.0 │ 17 Oct 25 20:00 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:00:29
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:00:29.334367  436648 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:00:29.334523  436648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:29.334536  436648 out.go:374] Setting ErrFile to fd 2...
	I1017 20:00:29.334569  436648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:29.334869  436648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:00:29.336199  436648 out.go:368] Setting JSON to false
	I1017 20:00:29.337356  436648 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9780,"bootTime":1760721449,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 20:00:29.337431  436648 start.go:141] virtualization:  
	I1017 20:00:29.340642  436648 out.go:179] * [kubernetes-upgrade-819667] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:00:29.343646  436648 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:00:29.343715  436648 notify.go:220] Checking for updates...
	I1017 20:00:29.349671  436648 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:00:29.352669  436648 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:00:29.356729  436648 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 20:00:29.359663  436648 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:00:29.362969  436648 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:00:29.367033  436648 config.go:182] Loaded profile config "kubernetes-upgrade-819667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:29.367571  436648 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:00:29.403953  436648 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:00:29.404078  436648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:00:29.464965  436648 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-17 20:00:29.454898344 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:00:29.465084  436648 docker.go:318] overlay module found
	I1017 20:00:29.468268  436648 out.go:179] * Using the docker driver based on existing profile
	I1017 20:00:29.471185  436648 start.go:305] selected driver: docker
	I1017 20:00:29.471256  436648 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-819667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-819667 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:00:29.471375  436648 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:00:29.472071  436648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:00:29.541139  436648 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-17 20:00:29.52559884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:00:29.541471  436648 cni.go:84] Creating CNI manager for ""
	I1017 20:00:29.541535  436648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:00:29.541571  436648 start.go:349] cluster config:
	{Name:kubernetes-upgrade-819667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-819667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgen
tPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:00:29.544697  436648 out.go:179] * Starting "kubernetes-upgrade-819667" primary control-plane node in "kubernetes-upgrade-819667" cluster
	I1017 20:00:29.547461  436648 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:00:29.550466  436648 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:00:29.553385  436648 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:00:29.553442  436648 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:00:29.553458  436648 cache.go:58] Caching tarball of preloaded images
	I1017 20:00:29.553480  436648 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:00:29.553542  436648 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:00:29.553554  436648 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:00:29.553675  436648 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/config.json ...
	I1017 20:00:29.574729  436648 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:00:29.574752  436648 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:00:29.574766  436648 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:00:29.574788  436648 start.go:360] acquireMachinesLock for kubernetes-upgrade-819667: {Name:mk36f903b6b98ce7786cdaf804e9cbb9cfeef883 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:00:29.574848  436648 start.go:364] duration metric: took 37.505µs to acquireMachinesLock for "kubernetes-upgrade-819667"
	I1017 20:00:29.574871  436648 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:00:29.574880  436648 fix.go:54] fixHost starting: 
	I1017 20:00:29.575141  436648 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-819667 --format={{.State.Status}}
	I1017 20:00:29.593138  436648 fix.go:112] recreateIfNeeded on kubernetes-upgrade-819667: state=Running err=<nil>
	W1017 20:00:29.593171  436648 fix.go:138] unexpected machine state, will restart: <nil>
	W1017 20:00:27.751366  433027 pod_ready.go:104] pod "kube-controller-manager-pause-217784" is not "Ready", error: <nil>
	W1017 20:00:30.250507  433027 pod_ready.go:104] pod "kube-controller-manager-pause-217784" is not "Ready", error: <nil>
	I1017 20:00:29.596294  436648 out.go:252] * Updating the running docker "kubernetes-upgrade-819667" container ...
	I1017 20:00:29.596334  436648 machine.go:93] provisionDockerMachine start ...
	I1017 20:00:29.596434  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:29.615216  436648 main.go:141] libmachine: Using SSH client type: native
	I1017 20:00:29.615556  436648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33369 <nil> <nil>}
	I1017 20:00:29.615571  436648 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:00:29.768268  436648 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-819667
	
	I1017 20:00:29.768334  436648 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-819667"
	I1017 20:00:29.768416  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:29.789262  436648 main.go:141] libmachine: Using SSH client type: native
	I1017 20:00:29.789576  436648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33369 <nil> <nil>}
	I1017 20:00:29.789594  436648 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-819667 && echo "kubernetes-upgrade-819667" | sudo tee /etc/hostname
	I1017 20:00:29.951345  436648 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-819667
	
	I1017 20:00:29.951422  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:29.980744  436648 main.go:141] libmachine: Using SSH client type: native
	I1017 20:00:29.981053  436648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33369 <nil> <nil>}
	I1017 20:00:29.981075  436648 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-819667' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-819667/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-819667' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:00:30.181166  436648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:00:30.181193  436648 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 20:00:30.181235  436648 ubuntu.go:190] setting up certificates
	I1017 20:00:30.181250  436648 provision.go:84] configureAuth start
	I1017 20:00:30.181337  436648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-819667
	I1017 20:00:30.200215  436648 provision.go:143] copyHostCerts
	I1017 20:00:30.200304  436648 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 20:00:30.200336  436648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 20:00:30.200428  436648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 20:00:30.200626  436648 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 20:00:30.200643  436648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 20:00:30.200681  436648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 20:00:30.200750  436648 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 20:00:30.200761  436648 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 20:00:30.200787  436648 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 20:00:30.200846  436648 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-819667 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-819667 localhost minikube]
	I1017 20:00:30.588512  436648 provision.go:177] copyRemoteCerts
	I1017 20:00:30.588598  436648 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:00:30.588649  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:30.609615  436648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/kubernetes-upgrade-819667/id_rsa Username:docker}
	I1017 20:00:30.713445  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:00:30.732979  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1017 20:00:30.757254  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:00:30.779670  436648 provision.go:87] duration metric: took 598.405452ms to configureAuth
	I1017 20:00:30.779706  436648 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:00:30.779950  436648 config.go:182] Loaded profile config "kubernetes-upgrade-819667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:30.780085  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:30.797895  436648 main.go:141] libmachine: Using SSH client type: native
	I1017 20:00:30.798206  436648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33369 <nil> <nil>}
	I1017 20:00:30.798232  436648 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:00:31.481590  436648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:00:31.481615  436648 machine.go:96] duration metric: took 1.885273345s to provisionDockerMachine
	I1017 20:00:31.481626  436648 start.go:293] postStartSetup for "kubernetes-upgrade-819667" (driver="docker")
	I1017 20:00:31.481653  436648 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:00:31.481753  436648 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:00:31.481813  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:31.499721  436648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/kubernetes-upgrade-819667/id_rsa Username:docker}
	I1017 20:00:31.604396  436648 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:00:31.607903  436648 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:00:31.607930  436648 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:00:31.607941  436648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 20:00:31.607994  436648 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 20:00:31.608096  436648 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 20:00:31.608194  436648 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:00:31.615659  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:00:31.633870  436648 start.go:296] duration metric: took 152.228146ms for postStartSetup
	I1017 20:00:31.633970  436648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:00:31.634061  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:31.652299  436648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/kubernetes-upgrade-819667/id_rsa Username:docker}
	I1017 20:00:31.761402  436648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:00:31.772410  436648 fix.go:56] duration metric: took 2.19752229s for fixHost
	I1017 20:00:31.772433  436648 start.go:83] releasing machines lock for "kubernetes-upgrade-819667", held for 2.197572782s
	I1017 20:00:31.772515  436648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-819667
	I1017 20:00:31.791444  436648 ssh_runner.go:195] Run: cat /version.json
	I1017 20:00:31.791497  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:31.791731  436648 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:00:31.791830  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:31.826498  436648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/kubernetes-upgrade-819667/id_rsa Username:docker}
	I1017 20:00:31.835654  436648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/kubernetes-upgrade-819667/id_rsa Username:docker}
	I1017 20:00:32.054584  436648 ssh_runner.go:195] Run: systemctl --version
	I1017 20:00:32.061562  436648 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:00:32.133453  436648 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:00:32.139474  436648 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:00:32.139562  436648 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:00:32.149285  436648 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:00:32.149329  436648 start.go:495] detecting cgroup driver to use...
	I1017 20:00:32.149362  436648 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:00:32.149430  436648 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:00:32.166030  436648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:00:32.181224  436648 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:00:32.181297  436648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:00:32.201578  436648 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:00:32.234375  436648 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:00:32.537342  436648 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:00:32.845843  436648 docker.go:234] disabling docker service ...
	I1017 20:00:32.845917  436648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:00:32.908192  436648 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:00:32.930638  436648 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:00:33.308320  436648 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:00:33.674035  436648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:00:33.726734  436648 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:00:33.778304  436648 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:00:33.778403  436648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:00:33.817314  436648 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:00:33.817397  436648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:00:33.857159  436648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:00:33.879735  436648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:00:33.912260  436648 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:00:33.932003  436648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:00:33.954564  436648 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:00:33.981357  436648 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:00:34.013306  436648 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:00:34.032713  436648 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:00:34.063066  436648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:00:34.460233  436648 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:00:34.759556  436648 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:00:34.759636  436648 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:00:34.763580  436648 start.go:563] Will wait 60s for crictl version
	I1017 20:00:34.763658  436648 ssh_runner.go:195] Run: which crictl
	I1017 20:00:34.767821  436648 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:00:34.802510  436648 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:00:34.802623  436648 ssh_runner.go:195] Run: crio --version
	I1017 20:00:34.839141  436648 ssh_runner.go:195] Run: crio --version
	I1017 20:00:34.873744  436648 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1017 20:00:32.255026  433027 pod_ready.go:104] pod "kube-controller-manager-pause-217784" is not "Ready", error: <nil>
	W1017 20:00:34.260392  433027 pod_ready.go:104] pod "kube-controller-manager-pause-217784" is not "Ready", error: <nil>
	I1017 20:00:34.876821  436648 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-819667 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:00:34.894571  436648 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 20:00:34.904039  436648 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-819667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-819667 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:00:34.904142  436648 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:00:34.904192  436648 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:00:34.951234  436648 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:00:34.951257  436648 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:00:34.951328  436648 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:00:35.008315  436648 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:00:35.008404  436648 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:00:35.008429  436648 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1017 20:00:35.008655  436648 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-819667 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-819667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
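
The lines above show the kubelet systemd drop-in that minikube generates for this node (Wants=crio.service plus an ExecStart carrying --hostname-override and --node-ip), together with the KubernetesConfig it was derived from. As a rough illustration of how such a drop-in can be rendered from per-node settings, here is a minimal Go sketch using text/template; the template text and struct fields are assumptions for illustration only, not minikube's actual implementation.

    package main

    import (
        "os"
        "text/template"
    )

    // kubeletUnit holds the values that vary per node; the field names are
    // illustrative, not minikube's real types.
    type kubeletUnit struct {
        KubeletPath      string
        HostnameOverride string
        NodeIP           string
    }

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.HostnameOverride}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        tmpl := template.Must(template.New("10-kubeadm.conf").Parse(dropIn))
        // Render the drop-in for the node seen in the log above.
        err := tmpl.Execute(os.Stdout, kubeletUnit{
            KubeletPath:      "/var/lib/minikube/binaries/v1.34.1/kubelet",
            HostnameOverride: "kubernetes-upgrade-819667",
            NodeIP:           "192.168.76.2",
        })
        if err != nil {
            panic(err)
        }
    }

The rendered output corresponds to the 375-byte file that is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf in this log.
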
	I1017 20:00:35.008801  436648 ssh_runner.go:195] Run: crio config
	I1017 20:00:35.087473  436648 cni.go:84] Creating CNI manager for ""
	I1017 20:00:35.087549  436648 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:00:35.087606  436648 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:00:35.087665  436648 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-819667 NodeName:kubernetes-upgrade-819667 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:00:35.087878  436648 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-819667"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:00:35.088000  436648 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:00:35.098597  436648 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:00:35.098670  436648 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:00:35.111167  436648 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1017 20:00:35.129365  436648 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:00:35.144872  436648 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1017 20:00:35.163656  436648 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:00:35.168263  436648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:00:35.404120  436648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:00:35.431432  436648 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667 for IP: 192.168.76.2
	I1017 20:00:35.431451  436648 certs.go:195] generating shared ca certs ...
	I1017 20:00:35.431467  436648 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:00:35.431599  436648 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 20:00:35.431653  436648 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 20:00:35.431670  436648 certs.go:257] generating profile certs ...
	I1017 20:00:35.431764  436648 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/client.key
	I1017 20:00:35.431820  436648 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/apiserver.key.65ed7d0b
	I1017 20:00:35.431863  436648 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/proxy-client.key
	I1017 20:00:35.431973  436648 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 20:00:35.432011  436648 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 20:00:35.432024  436648 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:00:35.432050  436648 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:00:35.432077  436648 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:00:35.432103  436648 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 20:00:35.432148  436648 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:00:35.432836  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:00:35.462263  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:00:35.483011  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:00:35.513533  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 20:00:35.535066  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1017 20:00:35.555109  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:00:35.574339  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:00:35.593822  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:00:35.613354  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 20:00:35.633081  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 20:00:35.651418  436648 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:00:35.673204  436648 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:00:35.690442  436648 ssh_runner.go:195] Run: openssl version
	I1017 20:00:35.697797  436648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:00:35.706957  436648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:00:35.713108  436648 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:00:35.713174  436648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:00:35.757442  436648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:00:35.765628  436648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 20:00:35.774316  436648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 20:00:35.777988  436648 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 20:00:35.778099  436648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 20:00:35.818886  436648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 20:00:35.826914  436648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 20:00:35.835052  436648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 20:00:35.839620  436648 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 20:00:35.839697  436648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 20:00:35.880815  436648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:00:35.889008  436648 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:00:35.892884  436648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:00:35.938894  436648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:00:35.980667  436648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:00:36.023967  436648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:00:36.066645  436648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:00:36.108513  436648 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
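
Each of the `openssl x509 -noout -in <cert> -checkend 86400` calls above exits 0 only if the certificate remains valid for at least the next 86400 seconds (24 hours); this is how the restart path decides that the existing control-plane certificates can be reused. A minimal Go sketch of the equivalent check follows; the file path is a placeholder, not a claim about which cert matters most.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Placeholder path; substitute any PEM-encoded certificate file.
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Equivalent of `openssl x509 -checkend 86400`: is the certificate
        // still valid 24 hours from now?
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h; regeneration would be required")
            os.Exit(1)
        }
        fmt.Println("certificate valid for at least another 24h")
    }
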
	I1017 20:00:36.152397  436648 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-819667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-819667 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:00:36.152489  436648 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:00:36.152598  436648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:00:36.184259  436648 cri.go:89] found id: "dc254c9104ea046e16568206f8b0d4df2bfc3770d9d2523464d2a40ef2e2a621"
	I1017 20:00:36.184287  436648 cri.go:89] found id: "70efaa70a9ae2e51b25cebc8a4343491bc98c86c779ebdec15652b01e51591e5"
	I1017 20:00:36.184292  436648 cri.go:89] found id: "4dea6c2decbc8746b522b740738bc882cf2b475a98a3b7772145843eeee4dcdc"
	I1017 20:00:36.184296  436648 cri.go:89] found id: "dddc349eafb6f3c3decae0a4fe1a77955eaf4766dc92b5de514139343894a4e1"
	I1017 20:00:36.184300  436648 cri.go:89] found id: ""
	I1017 20:00:36.184352  436648 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 20:00:36.196124  436648 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:00:36Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:00:36.196245  436648 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:00:36.205925  436648 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:00:36.205999  436648 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:00:36.206090  436648 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:00:36.215450  436648 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:00:36.216205  436648 kubeconfig.go:125] found "kubernetes-upgrade-819667" server: "https://192.168.76.2:8443"
	I1017 20:00:36.217110  436648 kapi.go:59] client config for kubernetes-upgrade-819667: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 20:00:36.217605  436648 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1017 20:00:36.217625  436648 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1017 20:00:36.217631  436648 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1017 20:00:36.217636  436648 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1017 20:00:36.217642  436648 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1017 20:00:36.217913  436648 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:00:36.225877  436648 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1017 20:00:36.225914  436648 kubeadm.go:601] duration metric: took 19.896195ms to restartPrimaryControlPlane
	I1017 20:00:36.225924  436648 kubeadm.go:402] duration metric: took 73.537472ms to StartCluster
	I1017 20:00:36.225961  436648 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:00:36.226042  436648 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:00:36.226977  436648 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:00:36.227264  436648 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:00:36.227585  436648 config.go:182] Loaded profile config "kubernetes-upgrade-819667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:36.227733  436648 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:00:36.227806  436648 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-819667"
	I1017 20:00:36.227827  436648 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-819667"
	W1017 20:00:36.227837  436648 addons.go:247] addon storage-provisioner should already be in state true
	I1017 20:00:36.227858  436648 host.go:66] Checking if "kubernetes-upgrade-819667" exists ...
	I1017 20:00:36.228680  436648 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-819667 --format={{.State.Status}}
	I1017 20:00:36.228861  436648 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-819667"
	I1017 20:00:36.228907  436648 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-819667"
	I1017 20:00:36.229201  436648 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-819667 --format={{.State.Status}}
	I1017 20:00:36.232985  436648 out.go:179] * Verifying Kubernetes components...
	I1017 20:00:36.236355  436648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:00:36.257036  436648 kapi.go:59] client config for kubernetes-upgrade-819667: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/client.crt", KeyFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/profiles/kubernetes-upgrade-819667/client.key", CAFile:"/home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120190), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 20:00:36.259649  436648 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-819667"
	W1017 20:00:36.259671  436648 addons.go:247] addon default-storageclass should already be in state true
	I1017 20:00:36.259710  436648 host.go:66] Checking if "kubernetes-upgrade-819667" exists ...
	I1017 20:00:36.260169  436648 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-819667 --format={{.State.Status}}
	I1017 20:00:36.275926  436648 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:00:36.278901  436648 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:00:36.278926  436648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:00:36.279000  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:36.304247  436648 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:00:36.304278  436648 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:00:36.304338  436648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-819667
	I1017 20:00:36.325702  436648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/kubernetes-upgrade-819667/id_rsa Username:docker}
	I1017 20:00:36.344646  436648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33369 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/kubernetes-upgrade-819667/id_rsa Username:docker}
	I1017 20:00:36.483401  436648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:00:36.500325  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:00:36.511090  436648 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:00:36.511234  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:00:36.519643  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1017 20:00:36.623932  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:36.623976  436648 retry.go:31] will retry after 319.762769ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1017 20:00:36.625790  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:36.625816  436648 retry.go:31] will retry after 290.496875ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:36.917307  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:00:36.944802  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:00:37.012275  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1017 20:00:37.032415  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:37.032452  436648 retry.go:31] will retry after 249.987699ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1017 20:00:37.067202  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:37.067277  436648 retry.go:31] will retry after 228.257388ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:37.283514  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:00:37.295907  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1017 20:00:37.367703  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:37.367784  436648 retry.go:31] will retry after 744.067547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1017 20:00:37.372298  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:37.372331  436648 retry.go:31] will retry after 400.825746ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:37.511450  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:00:37.774148  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1017 20:00:37.837688  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:37.837717  436648 retry.go:31] will retry after 600.454251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:38.012021  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:00:38.112793  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1017 20:00:38.174932  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:38.174989  436648 retry.go:31] will retry after 913.935237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:38.439335  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1017 20:00:38.501158  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:38.501201  436648 retry.go:31] will retry after 1.41529173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:38.512304  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:00:39.011403  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:00:39.089464  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1017 20:00:39.148772  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:39.148806  436648 retry.go:31] will retry after 1.525469269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1017 20:00:36.750640  433027 pod_ready.go:104] pod "kube-controller-manager-pause-217784" is not "Ready", error: <nil>
	W1017 20:00:39.249700  433027 pod_ready.go:104] pod "kube-controller-manager-pause-217784" is not "Ready", error: <nil>
	W1017 20:00:41.250156  433027 pod_ready.go:104] pod "kube-controller-manager-pause-217784" is not "Ready", error: <nil>
	I1017 20:00:41.750661  433027 pod_ready.go:94] pod "kube-controller-manager-pause-217784" is "Ready"
	I1017 20:00:41.750693  433027 pod_ready.go:86] duration metric: took 42.505884337s for pod "kube-controller-manager-pause-217784" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:41.752984  433027 pod_ready.go:83] waiting for pod "kube-proxy-zt258" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:41.757741  433027 pod_ready.go:94] pod "kube-proxy-zt258" is "Ready"
	I1017 20:00:41.757764  433027 pod_ready.go:86] duration metric: took 4.756545ms for pod "kube-proxy-zt258" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:41.759846  433027 pod_ready.go:83] waiting for pod "kube-scheduler-pause-217784" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:41.764381  433027 pod_ready.go:94] pod "kube-scheduler-pause-217784" is "Ready"
	I1017 20:00:41.764407  433027 pod_ready.go:86] duration metric: took 4.53566ms for pod "kube-scheduler-pause-217784" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:41.764421  433027 pod_ready.go:40] duration metric: took 51.043627718s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:00:41.826919  433027 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 20:00:41.830113  433027 out.go:179] * Done! kubectl is now configured to use "pause-217784" cluster and "default" namespace by default
	I1017 20:00:39.511441  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:00:39.916751  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1017 20:00:39.981021  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:39.981055  436648 retry.go:31] will retry after 2.589513284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:40.011342  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:00:40.511385  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:00:40.674533  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1017 20:00:40.739590  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:40.739618  436648 retry.go:31] will retry after 1.259707865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:41.012094  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:00:41.511874  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:00:41.999517  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:00:42.012139  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1017 20:00:42.134408  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:42.134445  436648 retry.go:31] will retry after 3.573402572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:42.512059  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:00:42.571414  436648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1017 20:00:42.634749  436648 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:42.634784  436648 retry.go:31] will retry after 3.425685548s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 20:00:43.011922  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:00:43.512176  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:00:44.011328  436648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	
	
	==> CRI-O <==
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.211841352Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.211874426Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.215156517Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.215192717Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.21521423Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.218496149Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.218527919Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.21854935Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.221637683Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.221668648Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.221692959Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.225386034Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.225417492Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.225439046Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.228559049Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 19:59:55 pause-217784 crio[2197]: time="2025-10-17T19:59:55.22859045Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:00:31 pause-217784 crio[2197]: time="2025-10-17T20:00:31.749789214Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=de418f21-b11f-4d79-bd1c-e821f2fb8951 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:00:31 pause-217784 crio[2197]: time="2025-10-17T20:00:31.751923341Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=f219eb93-ca08-4bb6-af9f-e494bf86dc5d name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:00:31 pause-217784 crio[2197]: time="2025-10-17T20:00:31.753013781Z" level=info msg="Creating container: kube-system/kube-controller-manager-pause-217784/kube-controller-manager" id=8c6b5b19-71d5-4960-b8f6-61e51d7dda5c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:00:31 pause-217784 crio[2197]: time="2025-10-17T20:00:31.753237447Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:00:31 pause-217784 crio[2197]: time="2025-10-17T20:00:31.765893325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:00:31 pause-217784 crio[2197]: time="2025-10-17T20:00:31.766691088Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:00:31 pause-217784 crio[2197]: time="2025-10-17T20:00:31.801990925Z" level=info msg="Created container 0e7006964e34fff23229d107c1ced6a1ba86c3e37a57059a480d06d19cea3006: kube-system/kube-controller-manager-pause-217784/kube-controller-manager" id=8c6b5b19-71d5-4960-b8f6-61e51d7dda5c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:00:31 pause-217784 crio[2197]: time="2025-10-17T20:00:31.807234898Z" level=info msg="Starting container: 0e7006964e34fff23229d107c1ced6a1ba86c3e37a57059a480d06d19cea3006" id=49042e59-1f34-48b3-977c-c53e51ac78e0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:00:31 pause-217784 crio[2197]: time="2025-10-17T20:00:31.818590464Z" level=info msg="Started container" PID=2784 containerID=0e7006964e34fff23229d107c1ced6a1ba86c3e37a57059a480d06d19cea3006 description=kube-system/kube-controller-manager-pause-217784/kube-controller-manager id=49042e59-1f34-48b3-977c-c53e51ac78e0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=99ceeba29f6c0cfb23dfe4cc17d9c05a9df753ff75ea6c8007be87ad9cdb5105
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	0e7006964e34f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago       Running             kube-controller-manager   3                   99ceeba29f6c0       kube-controller-manager-pause-217784   kube-system
	62588c2119e6d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Running             etcd                      2                   ac418e8ede010       etcd-pause-217784                      kube-system
	f9b2cb8da0165       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   About a minute ago   Running             coredns                   2                   2f371a5cc2c3a       coredns-66bc5c9577-g5z7h               kube-system
	96b630dc738ba       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Running             kube-scheduler            2                   cd9ffa2516d01       kube-scheduler-pause-217784            kube-system
	ac7ec3d90033a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Running             kube-apiserver            2                   e1321335f7701       kube-apiserver-pause-217784            kube-system
	3fd4475a37d18       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Running             kube-proxy                2                   a1aba6eb6b84d       kube-proxy-zt258                       kube-system
	674ea3bea7ff9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Running             kindnet-cni               2                   dbcebc531e2d1       kindnet-46jpk                          kube-system
	35101c6831df1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   2                   99ceeba29f6c0       kube-controller-manager-pause-217784   kube-system
	f65fdf97b4d90       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            1                   cd9ffa2516d01       kube-scheduler-pause-217784            kube-system
	d3af5d8cf3e85       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            1                   e1321335f7701       kube-apiserver-pause-217784            kube-system
	a2dfb5e26ac71       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               1                   dbcebc531e2d1       kindnet-46jpk                          kube-system
	e00ec46155335       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      1                   ac418e8ede010       etcd-pause-217784                      kube-system
	b5d8399275d88       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   About a minute ago   Exited              coredns                   1                   2f371a5cc2c3a       coredns-66bc5c9577-g5z7h               kube-system
	8142d317a44bb       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                1                   a1aba6eb6b84d       kube-proxy-zt258                       kube-system
	
	
	==> coredns [b5d8399275d880bb3281f1eef3884a684e6c9909d2b4a7142a465337ebb920e3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:51058 - 48861 "HINFO IN 8856667112206479484.8285220164548607494. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02605722s
	
	
	==> coredns [f9b2cb8da0165e5e84d72b243c5e3fd7d4e8e1dc2acf5e407090f93c881f74d2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39749 - 8425 "HINFO IN 867509571052015759.8496768120052480553. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.031894302s
	
	
	==> describe nodes <==
	Name:               pause-217784
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-217784
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=pause-217784
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_58_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:58:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-217784
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:00:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:00:36 +0000   Fri, 17 Oct 2025 19:58:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:00:36 +0000   Fri, 17 Oct 2025 19:58:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:00:36 +0000   Fri, 17 Oct 2025 19:58:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:00:36 +0000   Fri, 17 Oct 2025 19:58:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-217784
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                2fa317a1-f859-4a8f-a0f8-dd31253c3cc3
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-g5z7h                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m29s
	  kube-system                 etcd-pause-217784                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m35s
	  kube-system                 kindnet-46jpk                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m30s
	  kube-system                 kube-apiserver-pause-217784             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-controller-manager-pause-217784    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-proxy-zt258                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-scheduler-pause-217784             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m28s  kube-proxy       
	  Normal   Starting                 57s    kube-proxy       
	  Normal   Starting                 2m35s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m35s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m35s  kubelet          Node pause-217784 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s  kubelet          Node pause-217784 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s  kubelet          Node pause-217784 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m31s  node-controller  Node pause-217784 event: Registered Node pause-217784 in Controller
	  Normal   NodeReady                109s   kubelet          Node pause-217784 status is now: NodeReady
	  Warning  ContainerGCFailed        95s    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11s    node-controller  Node pause-217784 event: Registered Node pause-217784 in Controller
	
	
	==> dmesg <==
	[  +4.119232] overlayfs: idmapped layers are currently not supported
	[Oct17 19:32] overlayfs: idmapped layers are currently not supported
	[  +2.727676] overlayfs: idmapped layers are currently not supported
	[ +41.644994] overlayfs: idmapped layers are currently not supported
	[Oct17 19:33] overlayfs: idmapped layers are currently not supported
	[Oct17 19:34] overlayfs: idmapped layers are currently not supported
	[Oct17 19:36] overlayfs: idmapped layers are currently not supported
	[Oct17 19:41] overlayfs: idmapped layers are currently not supported
	[ +34.896999] overlayfs: idmapped layers are currently not supported
	[Oct17 19:42] overlayfs: idmapped layers are currently not supported
	[Oct17 19:43] overlayfs: idmapped layers are currently not supported
	[Oct17 19:45] overlayfs: idmapped layers are currently not supported
	[Oct17 19:46] overlayfs: idmapped layers are currently not supported
	[ +18.070710] overlayfs: idmapped layers are currently not supported
	[Oct17 19:47] overlayfs: idmapped layers are currently not supported
	[ +43.697346] overlayfs: idmapped layers are currently not supported
	[Oct17 19:48] overlayfs: idmapped layers are currently not supported
	[Oct17 19:49] overlayfs: idmapped layers are currently not supported
	[ +26.194162] overlayfs: idmapped layers are currently not supported
	[Oct17 19:50] overlayfs: idmapped layers are currently not supported
	[Oct17 19:52] overlayfs: idmapped layers are currently not supported
	[Oct17 19:54] overlayfs: idmapped layers are currently not supported
	[Oct17 19:55] overlayfs: idmapped layers are currently not supported
	[Oct17 19:56] overlayfs: idmapped layers are currently not supported
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [62588c2119e6d0606bb21463fe47ed8567945aaf80b48732876e92bd3aac6d3c] <==
	{"level":"warn","ts":"2025-10-17T19:59:47.845871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:47.867500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:47.887496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:47.913897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:47.933760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:47.949394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:47.962251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:47.981084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:47.996303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.025270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.064032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.084982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.102189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.119576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.138730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.169741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.187588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.201513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.240705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.281155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.302794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.337101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.354893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.379327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:59:48.454559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49446","server-name":"","error":"EOF"}
	
	
	==> etcd [e00ec461553354a63089e70d55be3852e68c0e75fb8407e6ddbd77706f937bb5] <==
	{"level":"info","ts":"2025-10-17T19:59:09.744762Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"warn","ts":"2025-10-17T19:59:09.762341Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-10-17T19:59:09.763277Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-17T19:59:09.764634Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-17T19:59:09.764673Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-17T19:59:09.764988Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-17T19:59:09.794012Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-17T19:59:10.119706Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-17T19:59:10.119770Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-217784","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-17T19:59:10.119897Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-17T19:59:10.123237Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-17T19:59:10.123311Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T19:59:10.123331Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-17T19:59:10.123413Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-17T19:59:10.123433Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-17T19:59:10.123633Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-17T19:59:10.123651Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-17T19:59:10.123662Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-17T19:59:10.123585Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-17T19:59:10.123699Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-17T19:59:10.123705Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T19:59:10.133295Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-17T19:59:10.133396Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T19:59:10.133446Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-17T19:59:10.133453Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-217784","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 20:00:47 up  2:43,  0 user,  load average: 2.40, 2.64, 2.31
	Linux pause-217784 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [674ea3bea7ff943a48ca4af34bde9cc6f0e26dd205525997435f1c2327b22556] <==
	E1017 19:59:35.209302       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 19:59:35.209447       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 19:59:36.100375       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 19:59:36.525901       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 19:59:36.544734       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 19:59:36.789268       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 19:59:38.407582       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 19:59:38.533430       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 19:59:39.257883       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 19:59:39.584469       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1017 19:59:49.308361       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:59:49.308465       1 metrics.go:72] Registering metrics
	I1017 19:59:49.308943       1 controller.go:711] "Syncing nftables rules"
	I1017 19:59:55.207596       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 19:59:55.207658       1 main.go:301] handling current node
	I1017 20:00:05.207297       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:00:05.207334       1 main.go:301] handling current node
	I1017 20:00:15.212572       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:00:15.212684       1 main.go:301] handling current node
	I1017 20:00:25.211580       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:00:25.211612       1 main.go:301] handling current node
	I1017 20:00:35.207658       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:00:35.208040       1 main.go:301] handling current node
	I1017 20:00:45.208638       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:00:45.208685       1 main.go:301] handling current node
	
	
	==> kindnet [a2dfb5e26ac71f5212fffeb91e67e0e371348b88a23fa9cba8152e7f4ac1cc12] <==
	I1017 19:59:09.318912       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 19:59:09.319165       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 19:59:09.319298       1 main.go:148] setting mtu 1500 for CNI 
	I1017 19:59:09.319309       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 19:59:09.319319       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T19:59:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 19:59:09.601225       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 19:59:09.601317       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 19:59:09.601351       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 19:59:09.601708       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [ac7ec3d90033a6dde87d8b2bc23b9d6e5c887a94e0db8a34e9e454c1ad12f17a] <==
	I1017 19:59:49.096205       1 shared_informer.go:349] "Waiting for caches to sync" controller="kubernetes-service-cidr-controller"
	I1017 19:59:49.255255       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 19:59:49.261760       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 19:59:49.261854       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 19:59:49.268351       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 19:59:49.268572       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1017 19:59:49.268666       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 19:59:49.269670       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 19:59:49.269755       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 19:59:49.269801       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 19:59:49.286652       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 19:59:49.286856       1 policy_source.go:240] refreshing policies
	I1017 19:59:49.296257       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 19:59:49.296306       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 19:59:49.301225       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 19:59:49.301969       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1017 19:59:49.302609       1 aggregator.go:171] initial CRD sync complete...
	I1017 19:59:49.302673       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 19:59:49.302703       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 19:59:49.302733       1 cache.go:39] Caches are synced for autoregister controller
	I1017 19:59:49.316493       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1017 19:59:49.319187       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 19:59:49.334864       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 19:59:49.970994       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:00:35.577465       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	
	
	==> kube-apiserver [d3af5d8cf3e85823f42bfa25e4df0cbc4644772954310529fa40dc6570250b0c] <==
	I1017 19:59:09.469351       1 options.go:263] external host was not specified, using 192.168.85.2
	I1017 19:59:09.477925       1 server.go:150] Version: v1.34.1
	I1017 19:59:09.478040       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [0e7006964e34fff23229d107c1ced6a1ba86c3e37a57059a480d06d19cea3006] <==
	I1017 20:00:36.931854       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 20:00:36.934208       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:00:36.934278       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:00:36.934311       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:00:36.937567       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:00:36.939049       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 20:00:36.943615       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:00:36.947647       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 20:00:36.950054       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 20:00:36.951722       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1017 20:00:36.959239       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 20:00:36.959360       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 20:00:36.959442       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 20:00:36.959542       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 20:00:36.959758       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 20:00:36.959849       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 20:00:36.959989       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 20:00:36.960010       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 20:00:36.960895       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 20:00:36.960393       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 20:00:36.961219       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-217784"
	I1017 20:00:36.961308       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 20:00:36.965682       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 20:00:36.965822       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:00:36.967995       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	
	
	==> kube-controller-manager [35101c6831df164efd0fe6402576f945fa6c3b23f28742ea5838dbd41250deb3] <==
	I1017 19:59:31.944793       1 serving.go:386] Generated self-signed cert in-memory
	I1017 19:59:32.571990       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1017 19:59:32.572022       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:59:32.573529       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1017 19:59:32.573714       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1017 19:59:32.573929       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1017 19:59:32.573979       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 19:59:49.201600       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [3fd4475a37d18c00aec1ef703d573e6e5fb6655507ad68d1fca8ae80ede45d04] <==
	I1017 19:59:35.821758       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:59:35.910035       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1017 19:59:35.910889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-217784&limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:59:36.913130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-217784&limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:59:39.231732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-217784&limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1017 19:59:49.318361       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:59:49.318456       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 19:59:49.318605       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:59:49.347600       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:59:49.347651       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:59:49.358248       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:59:49.358604       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:59:49.358806       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:59:49.360495       1 config.go:200] "Starting service config controller"
	I1017 19:59:49.360575       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:59:49.360628       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:59:49.360683       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:59:49.360726       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:59:49.360763       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:59:49.361393       1 config.go:309] "Starting node config controller"
	I1017 19:59:49.361991       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:59:49.362069       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:59:49.462411       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:59:49.467470       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:59:49.476599       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [8142d317a44bbe309ab0386847561af9cc42546e023be675b04898c245530117] <==
	I1017 19:59:09.376608       1 server_linux.go:53] "Using iptables proxy"
	
	
	==> kube-scheduler [96b630dc738baaa3ae91f61e89650eaff48265721a8893be95ca1c3b57d64c6e] <==
	I1017 19:59:43.289805       1 serving.go:386] Generated self-signed cert in-memory
	W1017 19:59:49.204640       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 19:59:49.204745       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 19:59:49.204778       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 19:59:49.204818       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 19:59:49.256876       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 19:59:49.256980       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:59:49.265760       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 19:59:49.265937       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:59:49.268613       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:59:49.268704       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 19:59:49.368970       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f65fdf97b4d906f7856f0df6988ecb8924864dd7377a0f64601e508eb40b7458] <==
	
	
	==> kubelet <==
	Oct 17 19:59:43 pause-217784 kubelet[1309]: W1017 19:59:43.008612    1309 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 17 19:59:44 pause-217784 kubelet[1309]: I1017 19:59:44.751290    1309 scope.go:117] "RemoveContainer" containerID="e00ec461553354a63089e70d55be3852e68c0e75fb8407e6ddbd77706f937bb5"
	Oct 17 19:59:49 pause-217784 kubelet[1309]: E1017 19:59:49.035019    1309 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-217784\" is forbidden: User \"system:node:pause-217784\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-217784' and this object" podUID="8e0af0c855e8d2c1fffbec063d7c38ca" pod="kube-system/kube-scheduler-pause-217784"
	Oct 17 19:59:49 pause-217784 kubelet[1309]: E1017 19:59:49.035526    1309 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-217784\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-217784' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 17 19:59:49 pause-217784 kubelet[1309]: E1017 19:59:49.079906    1309 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-217784\" is forbidden: User \"system:node:pause-217784\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-217784' and this object" podUID="adbf34715131edf4a0adf073cdfefb0d" pod="kube-system/etcd-pause-217784"
	Oct 17 19:59:49 pause-217784 kubelet[1309]: E1017 19:59:49.201970    1309 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-46jpk\" is forbidden: User \"system:node:pause-217784\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-217784' and this object" podUID="03412f46-522b-4ba3-8a9d-f1453429ea60" pod="kube-system/kindnet-46jpk"
	Oct 17 19:59:49 pause-217784 kubelet[1309]: E1017 19:59:49.234676    1309 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-zt258\" is forbidden: User \"system:node:pause-217784\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-217784' and this object" podUID="b2ec80fc-103f-4e5d-a8d8-ba147dc8c2df" pod="kube-system/kube-proxy-zt258"
	Oct 17 19:59:49 pause-217784 kubelet[1309]: E1017 19:59:49.249916    1309 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-g5z7h\" is forbidden: User \"system:node:pause-217784\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-217784' and this object" podUID="aa9d1acc-4613-468b-af2d-c1907dd42048" pod="kube-system/coredns-66bc5c9577-g5z7h"
	Oct 17 19:59:49 pause-217784 kubelet[1309]: I1017 19:59:49.280285    1309 scope.go:117] "RemoveContainer" containerID="8a2e0c0e3cf515d3df5cb05835c9998c9772491a9626eb43759688eabe46cd3d"
	Oct 17 19:59:49 pause-217784 kubelet[1309]: I1017 19:59:49.285299    1309 scope.go:117] "RemoveContainer" containerID="35101c6831df164efd0fe6402576f945fa6c3b23f28742ea5838dbd41250deb3"
	Oct 17 19:59:49 pause-217784 kubelet[1309]: E1017 19:59:49.285537    1309 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-217784_kube-system(3bb112e926c4becd104d4b9d920bd51e)\"" pod="kube-system/kube-controller-manager-pause-217784" podUID="3bb112e926c4becd104d4b9d920bd51e"
	Oct 17 19:59:52 pause-217784 kubelet[1309]: I1017 19:59:52.636111    1309 scope.go:117] "RemoveContainer" containerID="35101c6831df164efd0fe6402576f945fa6c3b23f28742ea5838dbd41250deb3"
	Oct 17 19:59:52 pause-217784 kubelet[1309]: E1017 19:59:52.636775    1309 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-217784_kube-system(3bb112e926c4becd104d4b9d920bd51e)\"" pod="kube-system/kube-controller-manager-pause-217784" podUID="3bb112e926c4becd104d4b9d920bd51e"
	Oct 17 20:00:02 pause-217784 kubelet[1309]: I1017 20:00:02.750298    1309 scope.go:117] "RemoveContainer" containerID="35101c6831df164efd0fe6402576f945fa6c3b23f28742ea5838dbd41250deb3"
	Oct 17 20:00:02 pause-217784 kubelet[1309]: E1017 20:00:02.750462    1309 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-217784_kube-system(3bb112e926c4becd104d4b9d920bd51e)\"" pod="kube-system/kube-controller-manager-pause-217784" podUID="3bb112e926c4becd104d4b9d920bd51e"
	Oct 17 20:00:12 pause-217784 kubelet[1309]: E1017 20:00:12.734582    1309 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7a74ddc9c1829da1c91cce2f0c341a07b58d5109d1055c0a5979517ae088341b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7a74ddc9c1829da1c91cce2f0c341a07b58d5109d1055c0a5979517ae088341b/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-apiserver-pause-217784_d71e9b68718e4348632412d448990b5b/kube-apiserver/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-apiserver-pause-217784_d71e9b68718e4348632412d448990b5b/kube-apiserver/0.log: no such file or directory
	Oct 17 20:00:12 pause-217784 kubelet[1309]: E1017 20:00:12.740895    1309 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a632040ca92db85595274657ce7649f0263caeb3670f0fe9def9dc496cb56aef/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a632040ca92db85595274657ce7649f0263caeb3670f0fe9def9dc496cb56aef/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_etcd-pause-217784_adbf34715131edf4a0adf073cdfefb0d/etcd/0.log" to get inode usage: stat /var/log/pods/kube-system_etcd-pause-217784_adbf34715131edf4a0adf073cdfefb0d/etcd/0.log: no such file or directory
	Oct 17 20:00:12 pause-217784 kubelet[1309]: E1017 20:00:12.767101    1309 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5a99844d65cad5e8b9dbe26d7e176134333c146fc29eb4908864bee5564bf424/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5a99844d65cad5e8b9dbe26d7e176134333c146fc29eb4908864bee5564bf424/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-scheduler-pause-217784_8e0af0c855e8d2c1fffbec063d7c38ca/kube-scheduler/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-scheduler-pause-217784_8e0af0c855e8d2c1fffbec063d7c38ca/kube-scheduler/0.log: no such file or directory
	Oct 17 20:00:12 pause-217784 kubelet[1309]: E1017 20:00:12.776547    1309 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5104298c8460fc62473f9597602ca69d7422488f68e6061340909086204f0737/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5104298c8460fc62473f9597602ca69d7422488f68e6061340909086204f0737/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-pause-217784_3bb112e926c4becd104d4b9d920bd51e/kube-controller-manager/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-pause-217784_3bb112e926c4becd104d4b9d920bd51e/kube-controller-manager/0.log: no such file or directory
	Oct 17 20:00:17 pause-217784 kubelet[1309]: I1017 20:00:17.748802    1309 scope.go:117] "RemoveContainer" containerID="35101c6831df164efd0fe6402576f945fa6c3b23f28742ea5838dbd41250deb3"
	Oct 17 20:00:17 pause-217784 kubelet[1309]: E1017 20:00:17.749706    1309 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-pause-217784_kube-system(3bb112e926c4becd104d4b9d920bd51e)\"" pod="kube-system/kube-controller-manager-pause-217784" podUID="3bb112e926c4becd104d4b9d920bd51e"
	Oct 17 20:00:31 pause-217784 kubelet[1309]: I1017 20:00:31.748497    1309 scope.go:117] "RemoveContainer" containerID="35101c6831df164efd0fe6402576f945fa6c3b23f28742ea5838dbd41250deb3"
	Oct 17 20:00:42 pause-217784 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:00:42 pause-217784 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:00:42 pause-217784 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-217784 -n pause-217784
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-217784 -n pause-217784: exit status 2 (373.67552ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-217784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-135652 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-135652 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (262.832295ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:03:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-135652 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-135652 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-135652 describe deploy/metrics-server -n kube-system: exit status 1 (81.435829ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-135652 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-135652
helpers_test.go:243: (dbg) docker inspect old-k8s-version-135652:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86",
	        "Created": "2025-10-17T20:02:51.429282597Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 450799,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:02:51.49680557Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86/hostname",
	        "HostsPath": "/var/lib/docker/containers/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86/hosts",
	        "LogPath": "/var/lib/docker/containers/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86-json.log",
	        "Name": "/old-k8s-version-135652",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-135652:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-135652",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86",
	                "LowerDir": "/var/lib/docker/overlay2/844484687bbb53beb93db63caed98fbb47e8945606d42c727f327a603cd08220-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/844484687bbb53beb93db63caed98fbb47e8945606d42c727f327a603cd08220/merged",
	                "UpperDir": "/var/lib/docker/overlay2/844484687bbb53beb93db63caed98fbb47e8945606d42c727f327a603cd08220/diff",
	                "WorkDir": "/var/lib/docker/overlay2/844484687bbb53beb93db63caed98fbb47e8945606d42c727f327a603cd08220/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-135652",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-135652/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-135652",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-135652",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-135652",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "615a8b7ef91c1c1cc5510a1fc36db2cacdb0ff2d32aab565d80274b0ad243fb5",
	            "SandboxKey": "/var/run/docker/netns/615a8b7ef91c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33410"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33411"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33412"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-135652": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:be:10:c5:c0:a1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "90204cc66e7ad6745643724a78275aac28eb4a09363d718713af2fa28c9cb97d",
	                    "EndpointID": "ec7abe1625783f7e73a3cee3875e0319920cf5fae07dd93a07319db0513414ba",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-135652",
	                        "b175bb475b3f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-135652 -n old-k8s-version-135652
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-135652 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-135652 logs -n 25: (1.157402055s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-804622 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo containerd config dump                                                                                                                                                                                                  │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo crio config                                                                                                                                                                                                             │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ delete  │ -p cilium-804622                                                                                                                                                                                                                              │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:01 UTC │
	│ start   │ -p force-systemd-env-945733 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-945733  │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:02 UTC │
	│ ssh     │ force-systemd-flag-285387 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-285387 │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:01 UTC │
	│ delete  │ -p force-systemd-flag-285387                                                                                                                                                                                                                  │ force-systemd-flag-285387 │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:01 UTC │
	│ start   │ -p cert-expiration-164379 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-164379    │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:02 UTC │
	│ delete  │ -p force-systemd-env-945733                                                                                                                                                                                                                   │ force-systemd-env-945733  │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ start   │ -p cert-options-533238 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ ssh     │ cert-options-533238 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ ssh     │ -p cert-options-533238 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ delete  │ -p cert-options-533238                                                                                                                                                                                                                        │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ start   │ -p old-k8s-version-135652 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:03 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-135652 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:02:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:02:45.384797  450411 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:02:45.385378  450411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:02:45.385432  450411 out.go:374] Setting ErrFile to fd 2...
	I1017 20:02:45.385529  450411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:02:45.386194  450411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:02:45.387730  450411 out.go:368] Setting JSON to false
	I1017 20:02:45.389456  450411 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9916,"bootTime":1760721449,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 20:02:45.389751  450411 start.go:141] virtualization:  
	I1017 20:02:45.394698  450411 out.go:179] * [old-k8s-version-135652] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:02:45.400024  450411 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:02:45.400739  450411 notify.go:220] Checking for updates...
	I1017 20:02:45.408330  450411 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:02:45.413835  450411 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:02:45.417202  450411 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 20:02:45.420494  450411 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:02:45.423704  450411 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:02:45.428079  450411 config.go:182] Loaded profile config "cert-expiration-164379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:02:45.428199  450411 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:02:45.467747  450411 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:02:45.467912  450411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:02:45.529985  450411 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:02:45.518347827 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:02:45.530096  450411 docker.go:318] overlay module found
	I1017 20:02:45.533372  450411 out.go:179] * Using the docker driver based on user configuration
	I1017 20:02:45.536392  450411 start.go:305] selected driver: docker
	I1017 20:02:45.536415  450411 start.go:925] validating driver "docker" against <nil>
	I1017 20:02:45.536429  450411 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:02:45.537321  450411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:02:45.595769  450411 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:02:45.585967618 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:02:45.595935  450411 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 20:02:45.596175  450411 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:02:45.599272  450411 out.go:179] * Using Docker driver with root privileges
	I1017 20:02:45.602198  450411 cni.go:84] Creating CNI manager for ""
	I1017 20:02:45.602279  450411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:02:45.602296  450411 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 20:02:45.602389  450411 start.go:349] cluster config:
	{Name:old-k8s-version-135652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-135652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:02:45.605605  450411 out.go:179] * Starting "old-k8s-version-135652" primary control-plane node in "old-k8s-version-135652" cluster
	I1017 20:02:45.608548  450411 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:02:45.611566  450411 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:02:45.614491  450411 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 20:02:45.614557  450411 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1017 20:02:45.614578  450411 cache.go:58] Caching tarball of preloaded images
	I1017 20:02:45.614587  450411 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:02:45.614718  450411 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:02:45.614730  450411 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1017 20:02:45.614854  450411 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/config.json ...
	I1017 20:02:45.614882  450411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/config.json: {Name:mk3fd12a4af4a48eaa80bab39580a52b3dcf5140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:02:45.635236  450411 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:02:45.635280  450411 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:02:45.635363  450411 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:02:45.635406  450411 start.go:360] acquireMachinesLock for old-k8s-version-135652: {Name:mkb7e5198ce4bb901f93d40f8941ec8842fd8eb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:02:45.635621  450411 start.go:364] duration metric: took 146.064µs to acquireMachinesLock for "old-k8s-version-135652"
	I1017 20:02:45.635727  450411 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-135652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-135652 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:02:45.635859  450411 start.go:125] createHost starting for "" (driver="docker")
	I1017 20:02:45.639502  450411 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 20:02:45.639740  450411 start.go:159] libmachine.API.Create for "old-k8s-version-135652" (driver="docker")
	I1017 20:02:45.639791  450411 client.go:168] LocalClient.Create starting
	I1017 20:02:45.639888  450411 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem
	I1017 20:02:45.639928  450411 main.go:141] libmachine: Decoding PEM data...
	I1017 20:02:45.639947  450411 main.go:141] libmachine: Parsing certificate...
	I1017 20:02:45.640002  450411 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem
	I1017 20:02:45.640023  450411 main.go:141] libmachine: Decoding PEM data...
	I1017 20:02:45.640038  450411 main.go:141] libmachine: Parsing certificate...
	I1017 20:02:45.640427  450411 cli_runner.go:164] Run: docker network inspect old-k8s-version-135652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 20:02:45.655573  450411 cli_runner.go:211] docker network inspect old-k8s-version-135652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 20:02:45.655664  450411 network_create.go:284] running [docker network inspect old-k8s-version-135652] to gather additional debugging logs...
	I1017 20:02:45.655681  450411 cli_runner.go:164] Run: docker network inspect old-k8s-version-135652
	W1017 20:02:45.671917  450411 cli_runner.go:211] docker network inspect old-k8s-version-135652 returned with exit code 1
	I1017 20:02:45.671958  450411 network_create.go:287] error running [docker network inspect old-k8s-version-135652]: docker network inspect old-k8s-version-135652: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-135652 not found
	I1017 20:02:45.671972  450411 network_create.go:289] output of [docker network inspect old-k8s-version-135652]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-135652 not found
	
	** /stderr **
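	Note: the failing "docker network inspect old-k8s-version-135652" above is expected on a fresh profile; minikube probes for the network, sees exit code 1 because it does not exist yet, and then creates it. A rough stdlib-only Go sketch of that probe-and-capture pattern (a hypothetical helper for illustration, not minikube's actual cli_runner.go):

	// cli_run.go - run a CLI command, capture stdout/stderr, surface the exit code.
	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) (stdout, stderr string, code int, err error) {
		cmd := exec.Command(name, args...)
		var out, errb bytes.Buffer
		cmd.Stdout, cmd.Stderr = &out, &errb
		err = cmd.Run()
		if exitErr, ok := err.(*exec.ExitError); ok {
			code = exitErr.ExitCode() // non-zero exit, as logged above
		}
		return out.String(), errb.String(), code, err
	}

	func main() {
		// Same probe as in the log: inspecting a network that does not exist yet returns exit code 1.
		_, stderr, code, _ := run("docker", "network", "inspect", "old-k8s-version-135652")
		fmt.Printf("exit code: %d, stderr: %s", code, stderr)
	}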
	I1017 20:02:45.672071  450411 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:02:45.690626  450411 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9f667d9c3ea2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:fc:1d:c6:d2:da} reservation:<nil>}
	I1017 20:02:45.691000  450411 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-82a22734829b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:22:5a:78:c5:e0:0a} reservation:<nil>}
	I1017 20:02:45.691430  450411 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0b88bd3b523f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:75:74:cd:15:9b} reservation:<nil>}
	I1017 20:02:45.691938  450411 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a52340}
	I1017 20:02:45.691981  450411 network_create.go:124] attempt to create docker network old-k8s-version-135652 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1017 20:02:45.692049  450411 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-135652 old-k8s-version-135652
	I1017 20:02:45.762670  450411 network_create.go:108] docker network old-k8s-version-135652 192.168.76.0/24 created
	I1017 20:02:45.762704  450411 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-135652" container
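	Note: the three "skipping subnet ... that is taken" lines followed by "using free private subnet 192.168.76.0/24" show minikube scanning for an unused /24 for the new profile's bridge network. A minimal sketch of that scan, assuming a step of 9 between candidates (inferred from the 49 → 58 → 67 → 76 sequence above; this is not minikube's actual network.go):

	// subnet_pick.go - pick the first 192.168.X.0/24 that does not overlap an existing bridge subnet.
	package main

	import (
		"fmt"
		"net"
	)

	func pickFreeSubnet(taken []*net.IPNet) (*net.IPNet, error) {
		for third := 49; third < 256; third += 9 {
			_, cand, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
			free := true
			for _, t := range taken {
				if t.Contains(cand.IP) || cand.Contains(t.IP) {
					free = false // overlaps an in-use subnet, keep scanning
					break
				}
			}
			if free {
				return cand, nil
			}
		}
		return nil, fmt.Errorf("no free /24 found")
	}

	func main() {
		var taken []*net.IPNet
		for _, s := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"} {
			_, n, _ := net.ParseCIDR(s)
			taken = append(taken, n)
		}
		free, err := pickFreeSubnet(taken)
		if err != nil {
			panic(err)
		}
		fmt.Println("using free private subnet:", free) // 192.168.76.0/24, matching the log above
	}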
	I1017 20:02:45.762780  450411 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 20:02:45.781181  450411 cli_runner.go:164] Run: docker volume create old-k8s-version-135652 --label name.minikube.sigs.k8s.io=old-k8s-version-135652 --label created_by.minikube.sigs.k8s.io=true
	I1017 20:02:45.800581  450411 oci.go:103] Successfully created a docker volume old-k8s-version-135652
	I1017 20:02:45.800683  450411 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-135652-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-135652 --entrypoint /usr/bin/test -v old-k8s-version-135652:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 20:02:46.357038  450411 oci.go:107] Successfully prepared a docker volume old-k8s-version-135652
	I1017 20:02:46.357086  450411 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 20:02:46.357105  450411 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 20:02:46.357191  450411 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-135652:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1017 20:02:51.358539  450411 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-135652:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.001313157s)
	I1017 20:02:51.358570  450411 kic.go:203] duration metric: took 5.001461879s to extract preloaded images to volume ...
	W1017 20:02:51.358715  450411 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1017 20:02:51.358880  450411 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 20:02:51.413605  450411 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-135652 --name old-k8s-version-135652 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-135652 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-135652 --network old-k8s-version-135652 --ip 192.168.76.2 --volume old-k8s-version-135652:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 20:02:51.730997  450411 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Running}}
	I1017 20:02:51.757787  450411 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Status}}
	I1017 20:02:51.784681  450411 cli_runner.go:164] Run: docker exec old-k8s-version-135652 stat /var/lib/dpkg/alternatives/iptables
	I1017 20:02:51.843437  450411 oci.go:144] the created container "old-k8s-version-135652" has a running status.
	I1017 20:02:51.843478  450411 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa...
	I1017 20:02:52.049340  450411 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 20:02:52.080216  450411 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Status}}
	I1017 20:02:52.110482  450411 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 20:02:52.110507  450411 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-135652 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 20:02:52.182323  450411 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Status}}
	I1017 20:02:52.211011  450411 machine.go:93] provisionDockerMachine start ...
	I1017 20:02:52.211095  450411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:02:52.234538  450411 main.go:141] libmachine: Using SSH client type: native
	I1017 20:02:52.234989  450411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1017 20:02:52.235003  450411 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:02:52.235698  450411 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 20:02:55.384107  450411 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-135652
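	Note: the "Error dialing TCP: ssh: handshake failed: EOF" at 20:02:52 followed by a successful "hostname" at 20:02:55 is the usual retry while sshd inside the freshly started container comes up. A generic stdlib sketch of that retry-until-deadline pattern (illustrative only, not libmachine's real SSH client), using the forwarded host port 33409 shown in the log:

	// ssh_retry.go - keep dialing the forwarded SSH port until it accepts connections or the deadline passes.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func dialWithRetry(addr string, timeout time.Duration) (net.Conn, error) {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				return conn, nil // sshd is reachable; the SSH handshake can proceed
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("giving up on %s: %w", addr, err)
			}
			time.Sleep(1 * time.Second)
		}
	}

	func main() {
		conn, err := dialWithRetry("127.0.0.1:33409", 60*time.Second)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer conn.Close()
		fmt.Println("connected to", conn.RemoteAddr())
	}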
	
	I1017 20:02:55.384131  450411 ubuntu.go:182] provisioning hostname "old-k8s-version-135652"
	I1017 20:02:55.384193  450411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:02:55.401944  450411 main.go:141] libmachine: Using SSH client type: native
	I1017 20:02:55.402267  450411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1017 20:02:55.402285  450411 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-135652 && echo "old-k8s-version-135652" | sudo tee /etc/hostname
	I1017 20:02:55.558515  450411 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-135652
	
	I1017 20:02:55.558653  450411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:02:55.576987  450411 main.go:141] libmachine: Using SSH client type: native
	I1017 20:02:55.577294  450411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1017 20:02:55.577825  450411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-135652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-135652/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-135652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:02:55.728349  450411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:02:55.728375  450411 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 20:02:55.728395  450411 ubuntu.go:190] setting up certificates
	I1017 20:02:55.728404  450411 provision.go:84] configureAuth start
	I1017 20:02:55.728479  450411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-135652
	I1017 20:02:55.751867  450411 provision.go:143] copyHostCerts
	I1017 20:02:55.751938  450411 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 20:02:55.751954  450411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 20:02:55.752035  450411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 20:02:55.752146  450411 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 20:02:55.752158  450411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 20:02:55.752187  450411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 20:02:55.752261  450411 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 20:02:55.752271  450411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 20:02:55.752298  450411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 20:02:55.752391  450411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-135652 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-135652]
	I1017 20:02:56.249229  450411 provision.go:177] copyRemoteCerts
	I1017 20:02:56.249304  450411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:02:56.249352  450411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:02:56.268825  450411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:02:56.389793  450411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:02:56.407953  450411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1017 20:02:56.426195  450411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:02:56.443751  450411 provision.go:87] duration metric: took 715.306221ms to configureAuth
	I1017 20:02:56.443778  450411 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:02:56.443960  450411 config.go:182] Loaded profile config "old-k8s-version-135652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 20:02:56.444063  450411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:02:56.461519  450411 main.go:141] libmachine: Using SSH client type: native
	I1017 20:02:56.461829  450411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1017 20:02:56.461848  450411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:02:56.734998  450411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:02:56.735018  450411 machine.go:96] duration metric: took 4.523987725s to provisionDockerMachine
	I1017 20:02:56.735027  450411 client.go:171] duration metric: took 11.095224304s to LocalClient.Create
	I1017 20:02:56.735041  450411 start.go:167] duration metric: took 11.095308658s to libmachine.API.Create "old-k8s-version-135652"
	I1017 20:02:56.735047  450411 start.go:293] postStartSetup for "old-k8s-version-135652" (driver="docker")
	I1017 20:02:56.735057  450411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:02:56.735129  450411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:02:56.735170  450411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:02:56.752816  450411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:02:56.856407  450411 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:02:56.860260  450411 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:02:56.860290  450411 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:02:56.860301  450411 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 20:02:56.860357  450411 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 20:02:56.860444  450411 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 20:02:56.860592  450411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:02:56.868014  450411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:02:56.886649  450411 start.go:296] duration metric: took 151.585961ms for postStartSetup
	I1017 20:02:56.887125  450411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-135652
	I1017 20:02:56.909296  450411 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/config.json ...
	I1017 20:02:56.909594  450411 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:02:56.909644  450411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:02:56.926139  450411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:02:57.025988  450411 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:02:57.031202  450411 start.go:128] duration metric: took 11.395327725s to createHost
	I1017 20:02:57.031227  450411 start.go:83] releasing machines lock for "old-k8s-version-135652", held for 11.395552908s
	I1017 20:02:57.031320  450411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-135652
	I1017 20:02:57.049840  450411 ssh_runner.go:195] Run: cat /version.json
	I1017 20:02:57.049897  450411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:02:57.049912  450411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:02:57.050008  450411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:02:57.068257  450411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:02:57.071939  450411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:02:57.168137  450411 ssh_runner.go:195] Run: systemctl --version
	I1017 20:02:57.259988  450411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:02:57.301254  450411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:02:57.306084  450411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:02:57.306180  450411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:02:57.334550  450411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1017 20:02:57.334630  450411 start.go:495] detecting cgroup driver to use...
	I1017 20:02:57.334681  450411 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:02:57.334761  450411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:02:57.352628  450411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:02:57.365546  450411 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:02:57.365631  450411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:02:57.385060  450411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:02:57.404470  450411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:02:57.534138  450411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:02:57.656915  450411 docker.go:234] disabling docker service ...
	I1017 20:02:57.656994  450411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:02:57.678520  450411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:02:57.694005  450411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:02:57.818462  450411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:02:57.939130  450411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:02:57.951864  450411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:02:57.966393  450411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1017 20:02:57.966510  450411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:02:57.975293  450411 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:02:57.975410  450411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:02:57.984480  450411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:02:57.993382  450411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:02:58.005038  450411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:02:58.013914  450411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:02:58.023075  450411 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:02:58.038081  450411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:02:58.047733  450411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:02:58.056035  450411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:02:58.063964  450411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:02:58.184475  450411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:02:58.313008  450411 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:02:58.313128  450411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
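	Note: after restarting cri-o, start.go:542 waits up to 60s for /var/run/crio/crio.sock to appear before probing crictl. A minimal poll loop expressing the same idea (a hypothetical helper, not minikube's implementation):

	// waitsock.go - poll for a path until it exists or the timeout elapses.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket file exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}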
	I1017 20:02:58.316962  450411 start.go:563] Will wait 60s for crictl version
	I1017 20:02:58.317075  450411 ssh_runner.go:195] Run: which crictl
	I1017 20:02:58.320609  450411 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:02:58.347221  450411 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:02:58.347373  450411 ssh_runner.go:195] Run: crio --version
	I1017 20:02:58.375379  450411 ssh_runner.go:195] Run: crio --version
	I1017 20:02:58.412145  450411 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1017 20:02:58.414974  450411 cli_runner.go:164] Run: docker network inspect old-k8s-version-135652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:02:58.431356  450411 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 20:02:58.435697  450411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:02:58.447289  450411 kubeadm.go:883] updating cluster {Name:old-k8s-version-135652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-135652 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:02:58.447415  450411 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 20:02:58.447488  450411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:02:58.479641  450411 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:02:58.479666  450411 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:02:58.479720  450411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:02:58.507304  450411 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:02:58.507325  450411 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:02:58.507334  450411 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1017 20:02:58.507432  450411 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-135652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-135652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
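	Note: the kubelet drop-in printed above (kubeadm.go:946) is rendered from the cluster config: runtime, Kubernetes version, node name and node IP are substituted into a unit template. A trimmed toy rendering with text/template, keeping only a few of the flags from the log and using hypothetical field names (not minikube's actual template):

	// kubelet_unit.go - render a simplified kubelet systemd drop-in from per-node values.
	package main

	import (
		"os"
		"text/template"
	)

	const dropIn = `[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		_ = t.Execute(os.Stdout, map[string]string{
			"Runtime":           "crio",
			"KubernetesVersion": "v1.28.0",
			"NodeName":          "old-k8s-version-135652",
			"NodeIP":            "192.168.76.2",
		})
	}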
	I1017 20:02:58.507521  450411 ssh_runner.go:195] Run: crio config
	I1017 20:02:58.566866  450411 cni.go:84] Creating CNI manager for ""
	I1017 20:02:58.566893  450411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:02:58.566912  450411 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:02:58.566936  450411 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-135652 NodeName:old-k8s-version-135652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:02:58.567071  450411 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-135652"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:02:58.567139  450411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1017 20:02:58.574715  450411 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:02:58.574836  450411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:02:58.582153  450411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1017 20:02:58.595165  450411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:02:58.607541  450411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1017 20:02:58.620077  450411 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:02:58.623259  450411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:02:58.633014  450411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:02:58.756862  450411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:02:58.773267  450411 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652 for IP: 192.168.76.2
	I1017 20:02:58.773337  450411 certs.go:195] generating shared ca certs ...
	I1017 20:02:58.773368  450411 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:02:58.773542  450411 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 20:02:58.773620  450411 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 20:02:58.773644  450411 certs.go:257] generating profile certs ...
	I1017 20:02:58.773746  450411 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/client.key
	I1017 20:02:58.773793  450411 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/client.crt with IP's: []
	I1017 20:02:59.552634  450411 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/client.crt ...
	I1017 20:02:59.552710  450411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/client.crt: {Name:mk132f95947006122b673c7fa4cb0a6bb6e63a20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:02:59.552964  450411 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/client.key ...
	I1017 20:02:59.553004  450411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/client.key: {Name:mk9486159c985893d0bb2d7480f02a46a141e6a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:02:59.553143  450411 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/apiserver.key.7915436e
	I1017 20:02:59.553188  450411 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/apiserver.crt.7915436e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1017 20:03:00.498016  450411 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/apiserver.crt.7915436e ...
	I1017 20:03:00.498098  450411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/apiserver.crt.7915436e: {Name:mk4fdc24b68f13a6615feec47a7a929bebb794b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:03:00.498332  450411 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/apiserver.key.7915436e ...
	I1017 20:03:00.498376  450411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/apiserver.key.7915436e: {Name:mkaeb27f78494cc0b5fa23909b0088a00df1f5ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:03:00.498510  450411 certs.go:382] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/apiserver.crt.7915436e -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/apiserver.crt
	I1017 20:03:00.498667  450411 certs.go:386] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/apiserver.key.7915436e -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/apiserver.key
	I1017 20:03:00.498773  450411 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/proxy-client.key
	I1017 20:03:00.498824  450411 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/proxy-client.crt with IP's: []
	I1017 20:03:00.979788  450411 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/proxy-client.crt ...
	I1017 20:03:00.979822  450411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/proxy-client.crt: {Name:mk3cbb9780a0b89e845695ec1823f287a786acd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:03:00.980022  450411 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/proxy-client.key ...
	I1017 20:03:00.980037  450411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/proxy-client.key: {Name:mkf2dd7df42de4e89337f0c143244443a580fec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
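	Note: the apiserver profile cert generated above is signed by the shared minikubeCA and carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2] listed in the log. A self-contained crypto/x509 sketch of issuing such a CA-signed cert with IP SANs (a throwaway CA is generated inline purely for illustration; minikube reuses ca.crt/ca.key from .minikube, and error handling is elided for brevity):

	// signcert.go - issue a CA-signed server certificate with IP SANs and print it as PEM.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA (minikube would load the existing minikubeCA key pair instead).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the IP SANs seen in the log above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
			},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}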
	I1017 20:03:00.980228  450411 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 20:03:00.980273  450411 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 20:03:00.980287  450411 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:03:00.980315  450411 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:03:00.980340  450411 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:03:00.980368  450411 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 20:03:00.980412  450411 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:03:00.981048  450411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:03:01.001573  450411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:03:01.020648  450411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:03:01.041011  450411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 20:03:01.060702  450411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1017 20:03:01.079089  450411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:03:01.098263  450411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:03:01.116876  450411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:03:01.135951  450411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:03:01.155201  450411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 20:03:01.174522  450411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 20:03:01.193769  450411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:03:01.207971  450411 ssh_runner.go:195] Run: openssl version
	I1017 20:03:01.214845  450411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 20:03:01.224344  450411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 20:03:01.228256  450411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 20:03:01.228364  450411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 20:03:01.271943  450411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 20:03:01.282170  450411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 20:03:01.294211  450411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 20:03:01.300934  450411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 20:03:01.301036  450411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 20:03:01.346910  450411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:03:01.359592  450411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:03:01.369173  450411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:03:01.375144  450411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:03:01.375209  450411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:03:01.421502  450411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
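	[annotation] The hash-and-symlink steps above follow the standard OpenSSL trust-store convention: each CA file is exposed in /etc/ssl/certs under a symlink named after its subject hash plus a ".0" suffix. A minimal Go sketch of that convention follows; it is illustrative only, not minikube's own code, and the paths and the reliance on the openssl binary are assumptions.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertBySubjectHash mirrors the "openssl x509 -hash -noout" plus
// "ln -fs <cert> /etc/ssl/certs/<hash>.0" steps from the log: OpenSSL-based
// clients locate trusted CAs through symlinks named after the subject hash.
func linkCertBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // "-f" behaviour: replace an existing link if present
	return os.Symlink(pemPath, link)
}

func main() {
	// Illustrative path; matches the minikubeCA.pem placement seen in the log.
	if err := linkCertBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}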
	I1017 20:03:01.430993  450411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:03:01.436405  450411 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 20:03:01.436487  450411 kubeadm.go:400] StartCluster: {Name:old-k8s-version-135652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-135652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:03:01.436691  450411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:03:01.436818  450411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:03:01.473765  450411 cri.go:89] found id: ""
	I1017 20:03:01.473850  450411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:03:01.483250  450411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:03:01.491361  450411 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 20:03:01.491428  450411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:03:01.501597  450411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:03:01.501623  450411 kubeadm.go:157] found existing configuration files:
	
	I1017 20:03:01.501843  450411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 20:03:01.511466  450411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:03:01.511595  450411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:03:01.519424  450411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 20:03:01.529492  450411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:03:01.529568  450411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:03:01.539145  450411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 20:03:01.547045  450411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:03:01.547164  450411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:03:01.556114  450411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 20:03:01.566365  450411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:03:01.566477  450411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 20:03:01.574191  450411 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 20:03:01.628294  450411 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1017 20:03:01.628553  450411 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 20:03:01.672327  450411 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 20:03:01.672456  450411 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1017 20:03:01.672564  450411 kubeadm.go:318] OS: Linux
	I1017 20:03:01.672651  450411 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 20:03:01.672758  450411 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1017 20:03:01.672858  450411 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 20:03:01.672941  450411 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 20:03:01.673076  450411 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 20:03:01.673190  450411 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 20:03:01.673255  450411 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 20:03:01.673340  450411 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 20:03:01.673409  450411 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1017 20:03:01.760334  450411 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 20:03:01.760495  450411 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 20:03:01.760622  450411 kubeadm.go:318] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1017 20:03:01.916934  450411 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 20:03:01.922892  450411 out.go:252]   - Generating certificates and keys ...
	I1017 20:03:01.922992  450411 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 20:03:01.923066  450411 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 20:03:02.144671  450411 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 20:03:02.603615  450411 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 20:03:03.106392  450411 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 20:03:04.062463  450411 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 20:03:05.238371  450411 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 20:03:05.238734  450411 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-135652] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1017 20:03:05.544777  450411 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 20:03:05.544923  450411 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-135652] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1017 20:03:05.902524  450411 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 20:03:06.472545  450411 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 20:03:06.806975  450411 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 20:03:06.807285  450411 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 20:03:07.093624  450411 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 20:03:07.484384  450411 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 20:03:07.831616  450411 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 20:03:08.729377  450411 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 20:03:08.730472  450411 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 20:03:08.733433  450411 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 20:03:08.736949  450411 out.go:252]   - Booting up control plane ...
	I1017 20:03:08.737068  450411 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 20:03:08.737158  450411 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 20:03:08.738221  450411 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 20:03:08.755442  450411 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 20:03:08.755566  450411 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 20:03:08.755623  450411 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 20:03:08.917139  450411 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1017 20:03:16.922688  450411 kubeadm.go:318] [apiclient] All control plane components are healthy after 8.008928 seconds
	I1017 20:03:16.923002  450411 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 20:03:16.937294  450411 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 20:03:17.469987  450411 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 20:03:17.470197  450411 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-135652 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 20:03:17.983993  450411 kubeadm.go:318] [bootstrap-token] Using token: qydkye.niylcbhfpr06zb7v
	I1017 20:03:17.986867  450411 out.go:252]   - Configuring RBAC rules ...
	I1017 20:03:17.986991  450411 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 20:03:17.992087  450411 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 20:03:18.003593  450411 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 20:03:18.010415  450411 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 20:03:18.022336  450411 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 20:03:18.027855  450411 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 20:03:18.045503  450411 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 20:03:18.327884  450411 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 20:03:18.437182  450411 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 20:03:18.438469  450411 kubeadm.go:318] 
	I1017 20:03:18.438541  450411 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 20:03:18.438547  450411 kubeadm.go:318] 
	I1017 20:03:18.438632  450411 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 20:03:18.438638  450411 kubeadm.go:318] 
	I1017 20:03:18.438663  450411 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 20:03:18.438721  450411 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 20:03:18.438772  450411 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 20:03:18.438776  450411 kubeadm.go:318] 
	I1017 20:03:18.438836  450411 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 20:03:18.438842  450411 kubeadm.go:318] 
	I1017 20:03:18.438889  450411 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 20:03:18.438893  450411 kubeadm.go:318] 
	I1017 20:03:18.438945  450411 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 20:03:18.439018  450411 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 20:03:18.439085  450411 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 20:03:18.439108  450411 kubeadm.go:318] 
	I1017 20:03:18.439192  450411 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 20:03:18.439278  450411 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 20:03:18.439285  450411 kubeadm.go:318] 
	I1017 20:03:18.439368  450411 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token qydkye.niylcbhfpr06zb7v \
	I1017 20:03:18.439469  450411 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 \
	I1017 20:03:18.439495  450411 kubeadm.go:318] 	--control-plane 
	I1017 20:03:18.439499  450411 kubeadm.go:318] 
	I1017 20:03:18.439583  450411 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 20:03:18.439588  450411 kubeadm.go:318] 
	I1017 20:03:18.439674  450411 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token qydkye.niylcbhfpr06zb7v \
	I1017 20:03:18.439775  450411 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 
	I1017 20:03:18.444103  450411 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1017 20:03:18.444321  450411 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
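	[annotation] The --discovery-token-ca-cert-hash value printed in both join commands above is kubeadm's pin on the cluster CA: the SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info. A hedged Go sketch that recomputes such a value from a CA certificate file (the path is illustrative):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// Recomputes kubeadm's discovery hash: the SHA-256 of the CA certificate's
// DER-encoded Subject Public Key Info.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}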
	I1017 20:03:18.444363  450411 cni.go:84] Creating CNI manager for ""
	I1017 20:03:18.444386  450411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:03:18.449376  450411 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 20:03:18.452584  450411 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 20:03:18.463460  450411 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1017 20:03:18.463481  450411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 20:03:18.480322  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 20:03:19.424504  450411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 20:03:19.424706  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:19.424793  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-135652 minikube.k8s.io/updated_at=2025_10_17T20_03_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d minikube.k8s.io/name=old-k8s-version-135652 minikube.k8s.io/primary=true
	I1017 20:03:19.609710  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:19.609775  450411 ops.go:34] apiserver oom_adj: -16
	I1017 20:03:20.110663  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:20.610616  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:21.110378  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:21.610479  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:22.110390  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:22.610607  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:23.110487  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:23.609797  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:24.110240  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:24.610476  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:25.109866  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:25.610509  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:26.109880  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:26.609836  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:27.109988  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:27.610676  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:28.110611  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:28.610611  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:29.110617  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:29.610074  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:30.110823  450411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:03:30.224716  450411 kubeadm.go:1113] duration metric: took 10.800053329s to wait for elevateKubeSystemPrivileges
	I1017 20:03:30.224746  450411 kubeadm.go:402] duration metric: took 28.788263015s to StartCluster
	I1017 20:03:30.224764  450411 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:03:30.224836  450411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:03:30.226037  450411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:03:30.226321  450411 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:03:30.226456  450411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 20:03:30.226764  450411 config.go:182] Loaded profile config "old-k8s-version-135652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 20:03:30.226818  450411 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:03:30.226949  450411 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-135652"
	I1017 20:03:30.227003  450411 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-135652"
	I1017 20:03:30.227061  450411 host.go:66] Checking if "old-k8s-version-135652" exists ...
	I1017 20:03:30.226963  450411 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-135652"
	I1017 20:03:30.227248  450411 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-135652"
	I1017 20:03:30.227667  450411 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Status}}
	I1017 20:03:30.227799  450411 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Status}}
	I1017 20:03:30.231040  450411 out.go:179] * Verifying Kubernetes components...
	I1017 20:03:30.234128  450411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:03:30.281057  450411 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-135652"
	I1017 20:03:30.281107  450411 host.go:66] Checking if "old-k8s-version-135652" exists ...
	I1017 20:03:30.281633  450411 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Status}}
	I1017 20:03:30.289966  450411 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:03:30.292981  450411 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:03:30.293009  450411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:03:30.293102  450411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:03:30.319404  450411 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:03:30.319427  450411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:03:30.319506  450411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:03:30.337281  450411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:03:30.360026  450411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
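	[annotation] The docker container inspect template used twice above resolves which host port is published for the container's 22/tcp, and that port (33409 here) is what the SSH clients on 127.0.0.1 connect to. A small Go sketch of the same lookup; the container name and the use of the docker CLI are assumptions for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort runs the same inspect template as the log to find the host
// port mapped to the container's SSH port 22.
func hostSSHPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("old-k8s-version-135652")
	fmt.Println(port, err)
}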
	I1017 20:03:30.618932  450411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:03:30.618950  450411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 20:03:30.688432  450411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:03:30.793867  450411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:03:31.514609  450411 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
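	[annotation] The sed pipeline a few lines up rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the gateway address (192.168.76.1) before queries are forwarded to /etc/resolv.conf, then feeds the result to "kubectl replace". A rough Go sketch of that textual edit; the sample Corefile is illustrative, not the one from this run.

package main

import (
	"fmt"
	"strings"
)

// insertHostRecord inserts a "hosts" stanza for host.minikube.internal
// immediately before the "forward . /etc/resolv.conf" line of a Corefile.
func insertHostRecord(corefile, hostIP string) string {
	stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			b.WriteString(stanza)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	// Sample Corefile for illustration only.
	sample := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(insertHostRecord(sample, "192.168.76.1"))
}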
	I1017 20:03:31.516793  450411 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-135652" to be "Ready" ...
	I1017 20:03:31.847116  450411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.053206448s)
	I1017 20:03:31.850331  450411 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1017 20:03:31.853223  450411 addons.go:514] duration metric: took 1.626428715s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1017 20:03:32.020664  450411 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-135652" context rescaled to 1 replicas
	W1017 20:03:33.520687  450411 node_ready.go:57] node "old-k8s-version-135652" has "Ready":"False" status (will retry)
	W1017 20:03:36.020079  450411 node_ready.go:57] node "old-k8s-version-135652" has "Ready":"False" status (will retry)
	W1017 20:03:38.021153  450411 node_ready.go:57] node "old-k8s-version-135652" has "Ready":"False" status (will retry)
	W1017 20:03:40.520276  450411 node_ready.go:57] node "old-k8s-version-135652" has "Ready":"False" status (will retry)
	W1017 20:03:42.520636  450411 node_ready.go:57] node "old-k8s-version-135652" has "Ready":"False" status (will retry)
	W1017 20:03:44.520978  450411 node_ready.go:57] node "old-k8s-version-135652" has "Ready":"False" status (will retry)
	I1017 20:03:45.023196  450411 node_ready.go:49] node "old-k8s-version-135652" is "Ready"
	I1017 20:03:45.023227  450411 node_ready.go:38] duration metric: took 13.506387514s for node "old-k8s-version-135652" to be "Ready" ...
	I1017 20:03:45.023243  450411 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:03:45.023327  450411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:03:45.078184  450411 api_server.go:72] duration metric: took 14.851823866s to wait for apiserver process to appear ...
	I1017 20:03:45.078217  450411 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:03:45.078278  450411 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:03:45.087794  450411 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1017 20:03:45.089857  450411 api_server.go:141] control plane version: v1.28.0
	I1017 20:03:45.089886  450411 api_server.go:131] duration metric: took 11.625304ms to wait for apiserver health ...
	I1017 20:03:45.089897  450411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:03:45.096761  450411 system_pods.go:59] 8 kube-system pods found
	I1017 20:03:45.096812  450411 system_pods.go:61] "coredns-5dd5756b68-74pn6" [a9d889b2-d91c-493f-a0a8-de610e7240d5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:03:45.096823  450411 system_pods.go:61] "etcd-old-k8s-version-135652" [985d2d7b-3099-455a-9396-243cdd940ebf] Running
	I1017 20:03:45.096830  450411 system_pods.go:61] "kindnet-spvzd" [50b2e826-62cc-4853-974d-13b9ab81b802] Running
	I1017 20:03:45.096835  450411 system_pods.go:61] "kube-apiserver-old-k8s-version-135652" [9e376f4f-93e6-4ce5-ab1e-051909c3d815] Running
	I1017 20:03:45.096841  450411 system_pods.go:61] "kube-controller-manager-old-k8s-version-135652" [a0affdd9-608a-4028-b1c7-d6a2773d33f6] Running
	I1017 20:03:45.096848  450411 system_pods.go:61] "kube-proxy-5qhvs" [ca7a19b2-9842-4190-85f5-9eb4e0985eea] Running
	I1017 20:03:45.096854  450411 system_pods.go:61] "kube-scheduler-old-k8s-version-135652" [a19340fe-f4de-443e-b749-f461c5fd13bf] Running
	I1017 20:03:45.096861  450411 system_pods.go:61] "storage-provisioner" [af094a04-92d3-44b6-b662-542feecaac6e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:03:45.096869  450411 system_pods.go:74] duration metric: took 6.964345ms to wait for pod list to return data ...
	I1017 20:03:45.096884  450411 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:03:45.102100  450411 default_sa.go:45] found service account: "default"
	I1017 20:03:45.102144  450411 default_sa.go:55] duration metric: took 5.251654ms for default service account to be created ...
	I1017 20:03:45.102158  450411 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:03:45.110905  450411 system_pods.go:86] 8 kube-system pods found
	I1017 20:03:45.111016  450411 system_pods.go:89] "coredns-5dd5756b68-74pn6" [a9d889b2-d91c-493f-a0a8-de610e7240d5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:03:45.111040  450411 system_pods.go:89] "etcd-old-k8s-version-135652" [985d2d7b-3099-455a-9396-243cdd940ebf] Running
	I1017 20:03:45.111087  450411 system_pods.go:89] "kindnet-spvzd" [50b2e826-62cc-4853-974d-13b9ab81b802] Running
	I1017 20:03:45.111117  450411 system_pods.go:89] "kube-apiserver-old-k8s-version-135652" [9e376f4f-93e6-4ce5-ab1e-051909c3d815] Running
	I1017 20:03:45.111144  450411 system_pods.go:89] "kube-controller-manager-old-k8s-version-135652" [a0affdd9-608a-4028-b1c7-d6a2773d33f6] Running
	I1017 20:03:45.111181  450411 system_pods.go:89] "kube-proxy-5qhvs" [ca7a19b2-9842-4190-85f5-9eb4e0985eea] Running
	I1017 20:03:45.111207  450411 system_pods.go:89] "kube-scheduler-old-k8s-version-135652" [a19340fe-f4de-443e-b749-f461c5fd13bf] Running
	I1017 20:03:45.111233  450411 system_pods.go:89] "storage-provisioner" [af094a04-92d3-44b6-b662-542feecaac6e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:03:45.111295  450411 retry.go:31] will retry after 297.462428ms: missing components: kube-dns
	I1017 20:03:45.420263  450411 system_pods.go:86] 8 kube-system pods found
	I1017 20:03:45.420303  450411 system_pods.go:89] "coredns-5dd5756b68-74pn6" [a9d889b2-d91c-493f-a0a8-de610e7240d5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:03:45.420311  450411 system_pods.go:89] "etcd-old-k8s-version-135652" [985d2d7b-3099-455a-9396-243cdd940ebf] Running
	I1017 20:03:45.420318  450411 system_pods.go:89] "kindnet-spvzd" [50b2e826-62cc-4853-974d-13b9ab81b802] Running
	I1017 20:03:45.420330  450411 system_pods.go:89] "kube-apiserver-old-k8s-version-135652" [9e376f4f-93e6-4ce5-ab1e-051909c3d815] Running
	I1017 20:03:45.420336  450411 system_pods.go:89] "kube-controller-manager-old-k8s-version-135652" [a0affdd9-608a-4028-b1c7-d6a2773d33f6] Running
	I1017 20:03:45.420340  450411 system_pods.go:89] "kube-proxy-5qhvs" [ca7a19b2-9842-4190-85f5-9eb4e0985eea] Running
	I1017 20:03:45.420346  450411 system_pods.go:89] "kube-scheduler-old-k8s-version-135652" [a19340fe-f4de-443e-b749-f461c5fd13bf] Running
	I1017 20:03:45.420362  450411 system_pods.go:89] "storage-provisioner" [af094a04-92d3-44b6-b662-542feecaac6e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:03:45.420379  450411 retry.go:31] will retry after 361.794051ms: missing components: kube-dns
	I1017 20:03:45.786297  450411 system_pods.go:86] 8 kube-system pods found
	I1017 20:03:45.786324  450411 system_pods.go:89] "coredns-5dd5756b68-74pn6" [a9d889b2-d91c-493f-a0a8-de610e7240d5] Running
	I1017 20:03:45.786331  450411 system_pods.go:89] "etcd-old-k8s-version-135652" [985d2d7b-3099-455a-9396-243cdd940ebf] Running
	I1017 20:03:45.786337  450411 system_pods.go:89] "kindnet-spvzd" [50b2e826-62cc-4853-974d-13b9ab81b802] Running
	I1017 20:03:45.786342  450411 system_pods.go:89] "kube-apiserver-old-k8s-version-135652" [9e376f4f-93e6-4ce5-ab1e-051909c3d815] Running
	I1017 20:03:45.786348  450411 system_pods.go:89] "kube-controller-manager-old-k8s-version-135652" [a0affdd9-608a-4028-b1c7-d6a2773d33f6] Running
	I1017 20:03:45.786352  450411 system_pods.go:89] "kube-proxy-5qhvs" [ca7a19b2-9842-4190-85f5-9eb4e0985eea] Running
	I1017 20:03:45.786357  450411 system_pods.go:89] "kube-scheduler-old-k8s-version-135652" [a19340fe-f4de-443e-b749-f461c5fd13bf] Running
	I1017 20:03:45.786361  450411 system_pods.go:89] "storage-provisioner" [af094a04-92d3-44b6-b662-542feecaac6e] Running
	I1017 20:03:45.786385  450411 system_pods.go:126] duration metric: took 684.209391ms to wait for k8s-apps to be running ...
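	[annotation] The "will retry after ...: missing components: kube-dns" lines above come from a poll-and-retry loop: list the kube-system pods, and if a required component is not yet Running, sleep briefly and try again until a deadline. A generic Go sketch of the pattern; the interval and the check function are placeholders, and the real intervals are randomized.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryUntil runs check() until it succeeds or the deadline passes,
// sleeping a fixed interval between attempts.
func retryUntil(timeout, interval time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", interval, err)
		time.Sleep(interval)
	}
}

func main() {
	attempts := 0
	_ = retryUntil(5*time.Second, 300*time.Millisecond, func() error {
		attempts++
		if attempts < 3 { // pretend kube-dns needs two more polls to come up
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
}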
	I1017 20:03:45.786394  450411 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:03:45.786454  450411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:03:45.799835  450411 system_svc.go:56] duration metric: took 13.431243ms WaitForService to wait for kubelet
	I1017 20:03:45.799862  450411 kubeadm.go:586] duration metric: took 15.573508085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:03:45.799882  450411 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:03:45.802652  450411 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:03:45.802685  450411 node_conditions.go:123] node cpu capacity is 2
	I1017 20:03:45.802702  450411 node_conditions.go:105] duration metric: took 2.815266ms to run NodePressure ...
	I1017 20:03:45.802715  450411 start.go:241] waiting for startup goroutines ...
	I1017 20:03:45.802723  450411 start.go:246] waiting for cluster config update ...
	I1017 20:03:45.802734  450411 start.go:255] writing updated cluster config ...
	I1017 20:03:45.803034  450411 ssh_runner.go:195] Run: rm -f paused
	I1017 20:03:45.806629  450411 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:03:45.811562  450411 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-74pn6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:03:45.816868  450411 pod_ready.go:94] pod "coredns-5dd5756b68-74pn6" is "Ready"
	I1017 20:03:45.816895  450411 pod_ready.go:86] duration metric: took 5.308194ms for pod "coredns-5dd5756b68-74pn6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:03:45.819819  450411 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:03:45.824813  450411 pod_ready.go:94] pod "etcd-old-k8s-version-135652" is "Ready"
	I1017 20:03:45.824841  450411 pod_ready.go:86] duration metric: took 4.996128ms for pod "etcd-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:03:45.827711  450411 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:03:45.832290  450411 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-135652" is "Ready"
	I1017 20:03:45.832318  450411 pod_ready.go:86] duration metric: took 4.580353ms for pod "kube-apiserver-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:03:45.835165  450411 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:03:46.210529  450411 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-135652" is "Ready"
	I1017 20:03:46.210565  450411 pod_ready.go:86] duration metric: took 375.374669ms for pod "kube-controller-manager-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:03:46.411348  450411 pod_ready.go:83] waiting for pod "kube-proxy-5qhvs" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:03:46.810341  450411 pod_ready.go:94] pod "kube-proxy-5qhvs" is "Ready"
	I1017 20:03:46.810374  450411 pod_ready.go:86] duration metric: took 398.954439ms for pod "kube-proxy-5qhvs" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:03:47.011116  450411 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:03:47.411408  450411 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-135652" is "Ready"
	I1017 20:03:47.411435  450411 pod_ready.go:86] duration metric: took 400.249813ms for pod "kube-scheduler-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:03:47.411448  450411 pod_ready.go:40] duration metric: took 1.604788308s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
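	[annotation] Each pod_ready wait above amounts to listing kube-system pods by label and checking the PodReady condition. A hedged client-go sketch of that check; the kubeconfig path and label selector are illustrative and error handling is trimmed.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsReady reports whether every kube-system pod matching the selector
// has its Ready condition set to True.
func podsReady(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ok, err := podsReady(context.Background(), cs, "component=kube-scheduler")
	fmt.Println(ok, err)
}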
	I1017 20:03:47.470723  450411 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1017 20:03:47.474019  450411 out.go:203] 
	W1017 20:03:47.476902  450411 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1017 20:03:47.479936  450411 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1017 20:03:47.483795  450411 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-135652" cluster and "default" namespace by default
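	[annotation] The closing "(minor skew: 5)" note is simple version arithmetic: the difference between the minor components of the kubectl and cluster versions (1.33 vs 1.28). A small Go sketch of the same calculation:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components
// of two MAJOR.MINOR.PATCH version strings.
func minorSkew(a, b string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	ma, err := minor(a)
	if err != nil {
		return 0, err
	}
	mb, err := minor(b)
	if err != nil {
		return 0, err
	}
	if ma > mb {
		return ma - mb, nil
	}
	return mb - ma, nil
}

func main() {
	skew, _ := minorSkew("1.33.2", "1.28.0")
	fmt.Println("minor skew:", skew) // prints 5
}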
	
	
	==> CRI-O <==
	Oct 17 20:03:45 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:45.373186835Z" level=info msg="Created container 43d95deb827f34419381cb04d0443a70ea15cbdf61329b1e49e2ab4b820c5565: kube-system/coredns-5dd5756b68-74pn6/coredns" id=4d0b9df7-c589-4c05-ae71-cb30c98d64c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:03:45 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:45.374418948Z" level=info msg="Starting container: 43d95deb827f34419381cb04d0443a70ea15cbdf61329b1e49e2ab4b820c5565" id=6e9c36e8-1373-4234-a04e-ea9dae188f21 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:03:45 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:45.380993963Z" level=info msg="Started container" PID=1932 containerID=43d95deb827f34419381cb04d0443a70ea15cbdf61329b1e49e2ab4b820c5565 description=kube-system/coredns-5dd5756b68-74pn6/coredns id=6e9c36e8-1373-4234-a04e-ea9dae188f21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b56baab889c6af0e99b2e8271d419f23e3cd2a604feef99edc4edecd72928cbe
	Oct 17 20:03:48 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:48.034977021Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c9624b69-2b9f-4c3d-a32d-390ca38d4d59 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:03:48 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:48.035068555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:03:48 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:48.040678861Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:80ac497985087d90b061e111e0f73c0fa9c6b233b07f784ddaff8d91fdd9abba UID:38081228-78de-468b-b2de-1ee71ee84cac NetNS:/var/run/netns/601220cb-eb3d-43b1-b689-042ea0babea2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000d7060}] Aliases:map[]}"
	Oct 17 20:03:48 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:48.040850697Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 17 20:03:48 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:48.053002812Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:80ac497985087d90b061e111e0f73c0fa9c6b233b07f784ddaff8d91fdd9abba UID:38081228-78de-468b-b2de-1ee71ee84cac NetNS:/var/run/netns/601220cb-eb3d-43b1-b689-042ea0babea2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000d7060}] Aliases:map[]}"
	Oct 17 20:03:48 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:48.053208Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 17 20:03:48 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:48.05902283Z" level=info msg="Ran pod sandbox 80ac497985087d90b061e111e0f73c0fa9c6b233b07f784ddaff8d91fdd9abba with infra container: default/busybox/POD" id=c9624b69-2b9f-4c3d-a32d-390ca38d4d59 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:03:48 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:48.060083609Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=27b8d81e-2f5f-4bb3-b890-42eee2463faf name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:03:48 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:48.060234054Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=27b8d81e-2f5f-4bb3-b890-42eee2463faf name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:03:48 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:48.060283087Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=27b8d81e-2f5f-4bb3-b890-42eee2463faf name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:03:48 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:48.063088375Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f7a7c8a2-9809-4c7c-9624-b75f7466785c name=/runtime.v1.ImageService/PullImage
	Oct 17 20:03:48 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:48.067082341Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 20:03:49 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:49.957912928Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=f7a7c8a2-9809-4c7c-9624-b75f7466785c name=/runtime.v1.ImageService/PullImage
	Oct 17 20:03:49 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:49.961180424Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=33573932-79fc-486d-8f63-93500fd37e20 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:03:49 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:49.96376386Z" level=info msg="Creating container: default/busybox/busybox" id=d505f1ca-f652-4314-a1be-7a3fc77a9608 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:03:49 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:49.964675318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:03:49 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:49.969250477Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:03:49 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:49.969874731Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:03:49 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:49.985850913Z" level=info msg="Created container e8dcc3bde06bfb439b58fcc863994a88893a6cd606609e7f3b2ab5be1b0a8fb1: default/busybox/busybox" id=d505f1ca-f652-4314-a1be-7a3fc77a9608 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:03:49 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:49.989800687Z" level=info msg="Starting container: e8dcc3bde06bfb439b58fcc863994a88893a6cd606609e7f3b2ab5be1b0a8fb1" id=01bc51fe-f673-44e4-9b3f-67a6d4b08ef1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:03:49 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:49.991969915Z" level=info msg="Started container" PID=1985 containerID=e8dcc3bde06bfb439b58fcc863994a88893a6cd606609e7f3b2ab5be1b0a8fb1 description=default/busybox/busybox id=01bc51fe-f673-44e4-9b3f-67a6d4b08ef1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=80ac497985087d90b061e111e0f73c0fa9c6b233b07f784ddaff8d91fdd9abba
	Oct 17 20:03:55 old-k8s-version-135652 crio[839]: time="2025-10-17T20:03:55.95082609Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	e8dcc3bde06bf       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   80ac497985087       busybox                                          default
	43d95deb827f3       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      12 seconds ago      Running             coredns                   0                   b56baab889c6a       coredns-5dd5756b68-74pn6                         kube-system
	9acacb13d4870       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   16a88fa5e46f1       storage-provisioner                              kube-system
	7c31841b8dd26       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   9c03ab301da28       kindnet-spvzd                                    kube-system
	3f4488113e7c5       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      26 seconds ago      Running             kube-proxy                0                   e45711b0a2a18       kube-proxy-5qhvs                                 kube-system
	0d31dee0979ee       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      46 seconds ago      Running             kube-controller-manager   0                   5a3d4440b0f12       kube-controller-manager-old-k8s-version-135652   kube-system
	3299a0b32fd74       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      46 seconds ago      Running             kube-scheduler            0                   c054bd2db75df       kube-scheduler-old-k8s-version-135652            kube-system
	ad802ce4fc427       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      46 seconds ago      Running             etcd                      0                   ea67716eaf370       etcd-old-k8s-version-135652                      kube-system
	3983cd339d3b6       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      46 seconds ago      Running             kube-apiserver            0                   14f20392c557f       kube-apiserver-old-k8s-version-135652            kube-system
	
	
	==> coredns [43d95deb827f34419381cb04d0443a70ea15cbdf61329b1e49e2ab4b820c5565] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46642 - 60860 "HINFO IN 4291874531602877937.6623823272342745176. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013818941s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-135652
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-135652
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=old-k8s-version-135652
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_03_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:03:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-135652
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:03:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:03:49 +0000   Fri, 17 Oct 2025 20:03:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:03:49 +0000   Fri, 17 Oct 2025 20:03:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:03:49 +0000   Fri, 17 Oct 2025 20:03:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:03:49 +0000   Fri, 17 Oct 2025 20:03:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-135652
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                9cdb3944-7199-44fe-af06-5219f78e8dc9
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-74pn6                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     26s
	  kube-system                 etcd-old-k8s-version-135652                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-spvzd                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-135652             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-135652    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-5qhvs                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-135652             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 39s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s   kubelet          Node old-k8s-version-135652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s   kubelet          Node old-k8s-version-135652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s   kubelet          Node old-k8s-version-135652 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-135652 event: Registered Node old-k8s-version-135652 in Controller
	  Normal  NodeReady                13s   kubelet          Node old-k8s-version-135652 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct17 19:34] overlayfs: idmapped layers are currently not supported
	[Oct17 19:36] overlayfs: idmapped layers are currently not supported
	[Oct17 19:41] overlayfs: idmapped layers are currently not supported
	[ +34.896999] overlayfs: idmapped layers are currently not supported
	[Oct17 19:42] overlayfs: idmapped layers are currently not supported
	[Oct17 19:43] overlayfs: idmapped layers are currently not supported
	[Oct17 19:45] overlayfs: idmapped layers are currently not supported
	[Oct17 19:46] overlayfs: idmapped layers are currently not supported
	[ +18.070710] overlayfs: idmapped layers are currently not supported
	[Oct17 19:47] overlayfs: idmapped layers are currently not supported
	[ +43.697346] overlayfs: idmapped layers are currently not supported
	[Oct17 19:48] overlayfs: idmapped layers are currently not supported
	[Oct17 19:49] overlayfs: idmapped layers are currently not supported
	[ +26.194162] overlayfs: idmapped layers are currently not supported
	[Oct17 19:50] overlayfs: idmapped layers are currently not supported
	[Oct17 19:52] overlayfs: idmapped layers are currently not supported
	[Oct17 19:54] overlayfs: idmapped layers are currently not supported
	[Oct17 19:55] overlayfs: idmapped layers are currently not supported
	[Oct17 19:56] overlayfs: idmapped layers are currently not supported
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:01] overlayfs: idmapped layers are currently not supported
	[ +29.873287] overlayfs: idmapped layers are currently not supported
	[Oct17 20:02] overlayfs: idmapped layers are currently not supported
	[ +29.827785] overlayfs: idmapped layers are currently not supported
	[Oct17 20:03] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ad802ce4fc4276107a1deb6f246be01f9449e59862bf80fd8e44d53170446dac] <==
	{"level":"info","ts":"2025-10-17T20:03:10.917064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-17T20:03:10.919307Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-17T20:03:10.921401Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-17T20:03:10.921572Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-17T20:03:10.92174Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-17T20:03:10.92751Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-17T20:03:10.927611Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-17T20:03:11.351504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-17T20:03:11.351608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-17T20:03:11.351659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-10-17T20:03:11.351699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-10-17T20:03:11.351735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-17T20:03:11.351771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-10-17T20:03:11.3518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-17T20:03:11.356702Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-135652 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-17T20:03:11.356795Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T20:03:11.357803Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-17T20:03:11.357931Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T20:03:11.358386Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T20:03:11.35899Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T20:03:11.365669Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T20:03:11.365755Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T20:03:11.366783Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-17T20:03:11.379777Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-17T20:03:11.380121Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:03:57 up  2:46,  0 user,  load average: 2.87, 3.19, 2.62
	Linux old-k8s-version-135652 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7c31841b8dd266563514cb5301a983329af55af46c9e97416faf43b4866d3b21] <==
	I1017 20:03:34.410529       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:03:34.410844       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 20:03:34.410997       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:03:34.500603       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:03:34.500727       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:03:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:03:34.701406       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:03:34.701475       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:03:34.701512       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:03:34.702166       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:03:34.902518       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:03:34.902549       1 metrics.go:72] Registering metrics
	I1017 20:03:34.902611       1 controller.go:711] "Syncing nftables rules"
	I1017 20:03:44.701275       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:03:44.701349       1 main.go:301] handling current node
	I1017 20:03:54.701940       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:03:54.701977       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3983cd339d3b68eaa30bf772e378b5fb8a985b611223ff1a94b540c3dd805c1d] <==
	I1017 20:03:15.148145       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:03:15.171376       1 shared_informer.go:318] Caches are synced for configmaps
	I1017 20:03:15.172850       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1017 20:03:15.173118       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1017 20:03:15.173745       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:03:15.177212       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1017 20:03:15.177302       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1017 20:03:15.177332       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1017 20:03:15.179534       1 controller.go:624] quota admission added evaluator for: namespaces
	I1017 20:03:15.227587       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:03:15.879032       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 20:03:15.884069       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 20:03:15.884092       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:03:16.499445       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:03:16.548874       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:03:16.615214       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 20:03:16.622216       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1017 20:03:16.623261       1 controller.go:624] quota admission added evaluator for: endpoints
	I1017 20:03:16.630837       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:03:17.115405       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1017 20:03:18.304945       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1017 20:03:18.326437       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 20:03:18.339781       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1017 20:03:30.500860       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1017 20:03:30.941664       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0d31dee0979ee641608234f6a1bdfa361461482f095ee374d2c29a44ab4c6452] <==
	I1017 20:03:30.176625       1 shared_informer.go:318] Caches are synced for disruption
	I1017 20:03:30.181528       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1017 20:03:30.183004       1 shared_informer.go:318] Caches are synced for resource quota
	I1017 20:03:30.532496       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5qhvs"
	I1017 20:03:30.540342       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 20:03:30.556662       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-spvzd"
	I1017 20:03:30.563100       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 20:03:30.563140       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1017 20:03:30.955600       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1017 20:03:31.042146       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-6q5tp"
	I1017 20:03:31.066634       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-74pn6"
	I1017 20:03:31.089017       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="134.409675ms"
	I1017 20:03:31.168817       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.741041ms"
	I1017 20:03:31.227234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.360233ms"
	I1017 20:03:31.227342       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.422µs"
	I1017 20:03:31.612037       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1017 20:03:31.673914       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-6q5tp"
	I1017 20:03:31.688825       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.131474ms"
	I1017 20:03:31.711428       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.073474ms"
	I1017 20:03:31.711895       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="173.927µs"
	I1017 20:03:44.871645       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="100.034µs"
	I1017 20:03:44.892971       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.331µs"
	I1017 20:03:44.990159       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1017 20:03:45.619185       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.507875ms"
	I1017 20:03:45.619266       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="43.002µs"
	
	
	==> kube-proxy [3f4488113e7c503c123a80f076475d89806cf677a6533c5ac06069669ea2c7ae] <==
	I1017 20:03:31.241038       1 server_others.go:69] "Using iptables proxy"
	I1017 20:03:31.271282       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1017 20:03:31.329290       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:03:31.331151       1 server_others.go:152] "Using iptables Proxier"
	I1017 20:03:31.331239       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1017 20:03:31.331283       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1017 20:03:31.331344       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1017 20:03:31.331568       1 server.go:846] "Version info" version="v1.28.0"
	I1017 20:03:31.331756       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:03:31.332478       1 config.go:188] "Starting service config controller"
	I1017 20:03:31.332554       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1017 20:03:31.332615       1 config.go:97] "Starting endpoint slice config controller"
	I1017 20:03:31.332650       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1017 20:03:31.333171       1 config.go:315] "Starting node config controller"
	I1017 20:03:31.333217       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1017 20:03:31.433038       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1017 20:03:31.433099       1 shared_informer.go:318] Caches are synced for service config
	I1017 20:03:31.433370       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3299a0b32fd74b9e9b860fe699f7b78b025c51040b0bd77f8e1b4ed45a3f92a1] <==
	W1017 20:03:15.134573       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1017 20:03:15.134611       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1017 20:03:15.134689       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1017 20:03:15.134741       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1017 20:03:15.134826       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1017 20:03:15.134863       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1017 20:03:15.134965       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1017 20:03:15.135016       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1017 20:03:15.978483       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1017 20:03:15.978621       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 20:03:16.061478       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1017 20:03:16.061590       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1017 20:03:16.106495       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1017 20:03:16.106535       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1017 20:03:16.156067       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1017 20:03:16.156167       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1017 20:03:16.177844       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1017 20:03:16.177964       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1017 20:03:16.180511       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1017 20:03:16.180568       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1017 20:03:16.222139       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1017 20:03:16.222182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1017 20:03:16.248832       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1017 20:03:16.248883       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1017 20:03:19.110191       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 17 20:03:30 old-k8s-version-135652 kubelet[1371]: I1017 20:03:30.574370    1371 topology_manager.go:215] "Topology Admit Handler" podUID="ca7a19b2-9842-4190-85f5-9eb4e0985eea" podNamespace="kube-system" podName="kube-proxy-5qhvs"
	Oct 17 20:03:30 old-k8s-version-135652 kubelet[1371]: I1017 20:03:30.599260    1371 topology_manager.go:215] "Topology Admit Handler" podUID="50b2e826-62cc-4853-974d-13b9ab81b802" podNamespace="kube-system" podName="kindnet-spvzd"
	Oct 17 20:03:30 old-k8s-version-135652 kubelet[1371]: I1017 20:03:30.616747    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/50b2e826-62cc-4853-974d-13b9ab81b802-cni-cfg\") pod \"kindnet-spvzd\" (UID: \"50b2e826-62cc-4853-974d-13b9ab81b802\") " pod="kube-system/kindnet-spvzd"
	Oct 17 20:03:30 old-k8s-version-135652 kubelet[1371]: I1017 20:03:30.616799    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50b2e826-62cc-4853-974d-13b9ab81b802-xtables-lock\") pod \"kindnet-spvzd\" (UID: \"50b2e826-62cc-4853-974d-13b9ab81b802\") " pod="kube-system/kindnet-spvzd"
	Oct 17 20:03:30 old-k8s-version-135652 kubelet[1371]: I1017 20:03:30.616825    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ca7a19b2-9842-4190-85f5-9eb4e0985eea-kube-proxy\") pod \"kube-proxy-5qhvs\" (UID: \"ca7a19b2-9842-4190-85f5-9eb4e0985eea\") " pod="kube-system/kube-proxy-5qhvs"
	Oct 17 20:03:30 old-k8s-version-135652 kubelet[1371]: I1017 20:03:30.616851    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca7a19b2-9842-4190-85f5-9eb4e0985eea-xtables-lock\") pod \"kube-proxy-5qhvs\" (UID: \"ca7a19b2-9842-4190-85f5-9eb4e0985eea\") " pod="kube-system/kube-proxy-5qhvs"
	Oct 17 20:03:30 old-k8s-version-135652 kubelet[1371]: I1017 20:03:30.616874    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca7a19b2-9842-4190-85f5-9eb4e0985eea-lib-modules\") pod \"kube-proxy-5qhvs\" (UID: \"ca7a19b2-9842-4190-85f5-9eb4e0985eea\") " pod="kube-system/kube-proxy-5qhvs"
	Oct 17 20:03:30 old-k8s-version-135652 kubelet[1371]: I1017 20:03:30.616902    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lcnh\" (UniqueName: \"kubernetes.io/projected/ca7a19b2-9842-4190-85f5-9eb4e0985eea-kube-api-access-7lcnh\") pod \"kube-proxy-5qhvs\" (UID: \"ca7a19b2-9842-4190-85f5-9eb4e0985eea\") " pod="kube-system/kube-proxy-5qhvs"
	Oct 17 20:03:30 old-k8s-version-135652 kubelet[1371]: I1017 20:03:30.616924    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z29r\" (UniqueName: \"kubernetes.io/projected/50b2e826-62cc-4853-974d-13b9ab81b802-kube-api-access-2z29r\") pod \"kindnet-spvzd\" (UID: \"50b2e826-62cc-4853-974d-13b9ab81b802\") " pod="kube-system/kindnet-spvzd"
	Oct 17 20:03:30 old-k8s-version-135652 kubelet[1371]: I1017 20:03:30.616946    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50b2e826-62cc-4853-974d-13b9ab81b802-lib-modules\") pod \"kindnet-spvzd\" (UID: \"50b2e826-62cc-4853-974d-13b9ab81b802\") " pod="kube-system/kindnet-spvzd"
	Oct 17 20:03:30 old-k8s-version-135652 kubelet[1371]: W1017 20:03:30.895341    1371 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86/crio-e45711b0a2a18d307cfebeb6d329e92d47dfc68a3fb16341bb4885ae5882b38a WatchSource:0}: Error finding container e45711b0a2a18d307cfebeb6d329e92d47dfc68a3fb16341bb4885ae5882b38a: Status 404 returned error can't find the container with id e45711b0a2a18d307cfebeb6d329e92d47dfc68a3fb16341bb4885ae5882b38a
	Oct 17 20:03:34 old-k8s-version-135652 kubelet[1371]: I1017 20:03:34.567505    1371 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5qhvs" podStartSLOduration=4.567461817 podCreationTimestamp="2025-10-17 20:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:03:31.587798046 +0000 UTC m=+13.318890386" watchObservedRunningTime="2025-10-17 20:03:34.567461817 +0000 UTC m=+16.298554157"
	Oct 17 20:03:38 old-k8s-version-135652 kubelet[1371]: I1017 20:03:38.454240    1371 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-spvzd" podStartSLOduration=5.05634637 podCreationTimestamp="2025-10-17 20:03:30 +0000 UTC" firstStartedPulling="2025-10-17 20:03:30.932904453 +0000 UTC m=+12.663996793" lastFinishedPulling="2025-10-17 20:03:34.330748499 +0000 UTC m=+16.061840839" observedRunningTime="2025-10-17 20:03:34.568472078 +0000 UTC m=+16.299564434" watchObservedRunningTime="2025-10-17 20:03:38.454190416 +0000 UTC m=+20.185282764"
	Oct 17 20:03:44 old-k8s-version-135652 kubelet[1371]: I1017 20:03:44.832001    1371 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 17 20:03:44 old-k8s-version-135652 kubelet[1371]: I1017 20:03:44.869395    1371 topology_manager.go:215] "Topology Admit Handler" podUID="a9d889b2-d91c-493f-a0a8-de610e7240d5" podNamespace="kube-system" podName="coredns-5dd5756b68-74pn6"
	Oct 17 20:03:44 old-k8s-version-135652 kubelet[1371]: I1017 20:03:44.876554    1371 topology_manager.go:215] "Topology Admit Handler" podUID="af094a04-92d3-44b6-b662-542feecaac6e" podNamespace="kube-system" podName="storage-provisioner"
	Oct 17 20:03:44 old-k8s-version-135652 kubelet[1371]: I1017 20:03:44.926069    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cq7x\" (UniqueName: \"kubernetes.io/projected/a9d889b2-d91c-493f-a0a8-de610e7240d5-kube-api-access-9cq7x\") pod \"coredns-5dd5756b68-74pn6\" (UID: \"a9d889b2-d91c-493f-a0a8-de610e7240d5\") " pod="kube-system/coredns-5dd5756b68-74pn6"
	Oct 17 20:03:44 old-k8s-version-135652 kubelet[1371]: I1017 20:03:44.926120    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9d889b2-d91c-493f-a0a8-de610e7240d5-config-volume\") pod \"coredns-5dd5756b68-74pn6\" (UID: \"a9d889b2-d91c-493f-a0a8-de610e7240d5\") " pod="kube-system/coredns-5dd5756b68-74pn6"
	Oct 17 20:03:44 old-k8s-version-135652 kubelet[1371]: I1017 20:03:44.926152    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxknn\" (UniqueName: \"kubernetes.io/projected/af094a04-92d3-44b6-b662-542feecaac6e-kube-api-access-xxknn\") pod \"storage-provisioner\" (UID: \"af094a04-92d3-44b6-b662-542feecaac6e\") " pod="kube-system/storage-provisioner"
	Oct 17 20:03:44 old-k8s-version-135652 kubelet[1371]: I1017 20:03:44.926179    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/af094a04-92d3-44b6-b662-542feecaac6e-tmp\") pod \"storage-provisioner\" (UID: \"af094a04-92d3-44b6-b662-542feecaac6e\") " pod="kube-system/storage-provisioner"
	Oct 17 20:03:45 old-k8s-version-135652 kubelet[1371]: I1017 20:03:45.608726    1371 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.608678493 podCreationTimestamp="2025-10-17 20:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:03:45.59199486 +0000 UTC m=+27.323087199" watchObservedRunningTime="2025-10-17 20:03:45.608678493 +0000 UTC m=+27.339770841"
	Oct 17 20:03:47 old-k8s-version-135652 kubelet[1371]: I1017 20:03:47.732286    1371 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-74pn6" podStartSLOduration=16.732222622 podCreationTimestamp="2025-10-17 20:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:03:45.609395544 +0000 UTC m=+27.340487892" watchObservedRunningTime="2025-10-17 20:03:47.732222622 +0000 UTC m=+29.463314962"
	Oct 17 20:03:47 old-k8s-version-135652 kubelet[1371]: I1017 20:03:47.733169    1371 topology_manager.go:215] "Topology Admit Handler" podUID="38081228-78de-468b-b2de-1ee71ee84cac" podNamespace="default" podName="busybox"
	Oct 17 20:03:47 old-k8s-version-135652 kubelet[1371]: I1017 20:03:47.761188    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8twj\" (UniqueName: \"kubernetes.io/projected/38081228-78de-468b-b2de-1ee71ee84cac-kube-api-access-k8twj\") pod \"busybox\" (UID: \"38081228-78de-468b-b2de-1ee71ee84cac\") " pod="default/busybox"
	Oct 17 20:03:48 old-k8s-version-135652 kubelet[1371]: W1017 20:03:48.055250    1371 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86/crio-80ac497985087d90b061e111e0f73c0fa9c6b233b07f784ddaff8d91fdd9abba WatchSource:0}: Error finding container 80ac497985087d90b061e111e0f73c0fa9c6b233b07f784ddaff8d91fdd9abba: Status 404 returned error can't find the container with id 80ac497985087d90b061e111e0f73c0fa9c6b233b07f784ddaff8d91fdd9abba
	
	
	==> storage-provisioner [9acacb13d48709b34326469566713d730fe789157aa59ac8d4d2e6742f9d830f] <==
	I1017 20:03:45.336836       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:03:45.390473       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:03:45.390566       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1017 20:03:45.427071       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:03:45.427378       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-135652_d7113777-2ea1-4c36-96db-0415ce848213!
	I1017 20:03:45.430894       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ebb79cd-89e4-4fbf-baf1-fb4d250e17dc", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-135652_d7113777-2ea1-4c36-96db-0415ce848213 became leader
	I1017 20:03:45.529621       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-135652_d7113777-2ea1-4c36-96db-0415ce848213!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-135652 -n old-k8s-version-135652
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-135652 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.39s)

x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-135652 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-135652 --alsologtostderr -v=1: exit status 80 (1.997247318s)

-- stdout --
	* Pausing node old-k8s-version-135652 ... 
	
	

-- /stdout --
** stderr ** 
	I1017 20:05:10.319223  456189 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:05:10.319471  456189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:05:10.319503  456189 out.go:374] Setting ErrFile to fd 2...
	I1017 20:05:10.319521  456189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:05:10.319817  456189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:05:10.320122  456189 out.go:368] Setting JSON to false
	I1017 20:05:10.320211  456189 mustload.go:65] Loading cluster: old-k8s-version-135652
	I1017 20:05:10.320773  456189 config.go:182] Loaded profile config "old-k8s-version-135652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 20:05:10.321329  456189 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Status}}
	I1017 20:05:10.338837  456189 host.go:66] Checking if "old-k8s-version-135652" exists ...
	I1017 20:05:10.339264  456189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:05:10.407999  456189 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-17 20:05:10.392838878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:05:10.408945  456189 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-135652 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 20:05:10.412459  456189 out.go:179] * Pausing node old-k8s-version-135652 ... 
	I1017 20:05:10.416103  456189 host.go:66] Checking if "old-k8s-version-135652" exists ...
	I1017 20:05:10.416471  456189 ssh_runner.go:195] Run: systemctl --version
	I1017 20:05:10.416587  456189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:05:10.435515  456189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:05:10.540607  456189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:05:10.555101  456189 pause.go:52] kubelet running: true
	I1017 20:05:10.555171  456189 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:05:10.807881  456189 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:05:10.807993  456189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:05:10.876046  456189 cri.go:89] found id: "e2905732bd31a768f3a5cbf8925e8ba87524f0e93f091c5ef5c4eff9b2bbfea1"
	I1017 20:05:10.876070  456189 cri.go:89] found id: "3cee913666df08b4596783394b5ea5ef68e091d315d89e582c2c7c642e59ea67"
	I1017 20:05:10.876075  456189 cri.go:89] found id: "fa973d114ac945d1a893e6ca7e8c2be9fdc00ee2b43156c1e95432093ff9c4d7"
	I1017 20:05:10.876080  456189 cri.go:89] found id: "2530bf6fb2cb6db3309ea4398f8a1439523777523161e863d0aff28c3cfb7f45"
	I1017 20:05:10.876083  456189 cri.go:89] found id: "4e2070657cd73d4d62f63f2797cbc953d5b2ae8ddd88015521bd823860afa9a3"
	I1017 20:05:10.876087  456189 cri.go:89] found id: "bbdce86113a44ab36a088aa850f2a5cddb392bb495337b9a38ddedc57c767b53"
	I1017 20:05:10.876091  456189 cri.go:89] found id: "04fec30cc87f2919128db984312df9b8cd7bdc614707218a4d5892931a729287"
	I1017 20:05:10.876120  456189 cri.go:89] found id: "69a0b4952c8c39c59af5f7438198d6b6fe4e7cb0a49809e1f434fa02cf6b54db"
	I1017 20:05:10.876125  456189 cri.go:89] found id: "72b4880a4ac31a561331fc9731e3f6e0e2d06b3829e4c1ee82b157b2fe66d636"
	I1017 20:05:10.876132  456189 cri.go:89] found id: "3f559ef86315b771de0dfbdb515dd71e17f13eafb0a40f4dc619305b0767aeff"
	I1017 20:05:10.876141  456189 cri.go:89] found id: "05a6be346e43f4a055341332d969f75e6f690027eef3065dcf733b4b45ebb9bf"
	I1017 20:05:10.876145  456189 cri.go:89] found id: ""
	I1017 20:05:10.876220  456189 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:05:10.895760  456189 retry.go:31] will retry after 198.835585ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:05:10Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:05:11.095231  456189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:05:11.109743  456189 pause.go:52] kubelet running: false
	I1017 20:05:11.109866  456189 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:05:11.286932  456189 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:05:11.287013  456189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:05:11.363940  456189 cri.go:89] found id: "e2905732bd31a768f3a5cbf8925e8ba87524f0e93f091c5ef5c4eff9b2bbfea1"
	I1017 20:05:11.363961  456189 cri.go:89] found id: "3cee913666df08b4596783394b5ea5ef68e091d315d89e582c2c7c642e59ea67"
	I1017 20:05:11.363967  456189 cri.go:89] found id: "fa973d114ac945d1a893e6ca7e8c2be9fdc00ee2b43156c1e95432093ff9c4d7"
	I1017 20:05:11.363971  456189 cri.go:89] found id: "2530bf6fb2cb6db3309ea4398f8a1439523777523161e863d0aff28c3cfb7f45"
	I1017 20:05:11.363974  456189 cri.go:89] found id: "4e2070657cd73d4d62f63f2797cbc953d5b2ae8ddd88015521bd823860afa9a3"
	I1017 20:05:11.363978  456189 cri.go:89] found id: "bbdce86113a44ab36a088aa850f2a5cddb392bb495337b9a38ddedc57c767b53"
	I1017 20:05:11.363981  456189 cri.go:89] found id: "04fec30cc87f2919128db984312df9b8cd7bdc614707218a4d5892931a729287"
	I1017 20:05:11.363984  456189 cri.go:89] found id: "69a0b4952c8c39c59af5f7438198d6b6fe4e7cb0a49809e1f434fa02cf6b54db"
	I1017 20:05:11.363988  456189 cri.go:89] found id: "72b4880a4ac31a561331fc9731e3f6e0e2d06b3829e4c1ee82b157b2fe66d636"
	I1017 20:05:11.363995  456189 cri.go:89] found id: "3f559ef86315b771de0dfbdb515dd71e17f13eafb0a40f4dc619305b0767aeff"
	I1017 20:05:11.363998  456189 cri.go:89] found id: "05a6be346e43f4a055341332d969f75e6f690027eef3065dcf733b4b45ebb9bf"
	I1017 20:05:11.364001  456189 cri.go:89] found id: ""
	I1017 20:05:11.364056  456189 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:05:11.376168  456189 retry.go:31] will retry after 561.899809ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:05:11Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:05:11.939026  456189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:05:11.957999  456189 pause.go:52] kubelet running: false
	I1017 20:05:11.958073  456189 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:05:12.153888  456189 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:05:12.154038  456189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:05:12.223230  456189 cri.go:89] found id: "e2905732bd31a768f3a5cbf8925e8ba87524f0e93f091c5ef5c4eff9b2bbfea1"
	I1017 20:05:12.223250  456189 cri.go:89] found id: "3cee913666df08b4596783394b5ea5ef68e091d315d89e582c2c7c642e59ea67"
	I1017 20:05:12.223255  456189 cri.go:89] found id: "fa973d114ac945d1a893e6ca7e8c2be9fdc00ee2b43156c1e95432093ff9c4d7"
	I1017 20:05:12.223259  456189 cri.go:89] found id: "2530bf6fb2cb6db3309ea4398f8a1439523777523161e863d0aff28c3cfb7f45"
	I1017 20:05:12.223262  456189 cri.go:89] found id: "4e2070657cd73d4d62f63f2797cbc953d5b2ae8ddd88015521bd823860afa9a3"
	I1017 20:05:12.223266  456189 cri.go:89] found id: "bbdce86113a44ab36a088aa850f2a5cddb392bb495337b9a38ddedc57c767b53"
	I1017 20:05:12.223269  456189 cri.go:89] found id: "04fec30cc87f2919128db984312df9b8cd7bdc614707218a4d5892931a729287"
	I1017 20:05:12.223272  456189 cri.go:89] found id: "69a0b4952c8c39c59af5f7438198d6b6fe4e7cb0a49809e1f434fa02cf6b54db"
	I1017 20:05:12.223275  456189 cri.go:89] found id: "72b4880a4ac31a561331fc9731e3f6e0e2d06b3829e4c1ee82b157b2fe66d636"
	I1017 20:05:12.223281  456189 cri.go:89] found id: "3f559ef86315b771de0dfbdb515dd71e17f13eafb0a40f4dc619305b0767aeff"
	I1017 20:05:12.223285  456189 cri.go:89] found id: "05a6be346e43f4a055341332d969f75e6f690027eef3065dcf733b4b45ebb9bf"
	I1017 20:05:12.223288  456189 cri.go:89] found id: ""
	I1017 20:05:12.223338  456189 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:05:12.238710  456189 out.go:203] 
	W1017 20:05:12.241635  456189 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:05:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:05:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:05:12.241664  456189 out.go:285] * 
	* 
	W1017 20:05:12.248326  456189 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:05:12.251435  456189 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-135652 --alsologtostderr -v=1 failed: exit status 80
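The GUEST_PAUSE failure above comes down to minikube running "sudo runc list -f json" on the node and giving up after a few retries once the command keeps exiting 1 with "open /run/runc: no such file or directory" (kubelet had already been stopped at that point, per pause.go:52). Purely as an illustration of the retry pattern visible at retry.go:31 in the log, and not minikube's actual implementation, the Go sketch below shells out to the same command and backs off between attempts; the function name, attempt count, and backoff values are invented for the example.

// Illustrative sketch only: retry "sudo runc list -f json" with a growing delay,
// mirroring the behaviour the captured log shows (retry.go:31). Not minikube code.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunc runs the command and retries on a non-zero exit, doubling the delay each time.
func listRunc(attempts int) ([]byte, error) {
	delay := 200 * time.Millisecond // starting delay is an assumption for the sketch
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			return out, nil
		}
		lastErr = fmt.Errorf("runc list failed: %v: %s", err, out)
		time.Sleep(delay)
		delay *= 2
	}
	return nil, lastErr
}

func main() {
	if out, err := listRunc(3); err != nil {
		// In this report the state dir /run/runc simply does not exist on the node,
		// so every attempt fails the same way and the pause is aborted.
		fmt.Println("giving up:", err)
	} else {
		fmt.Printf("%s\n", out)
	}
}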
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-135652
helpers_test.go:243: (dbg) docker inspect old-k8s-version-135652:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86",
	        "Created": "2025-10-17T20:02:51.429282597Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 454115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:04:10.938638472Z",
	            "FinishedAt": "2025-10-17T20:04:10.094661885Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86/hostname",
	        "HostsPath": "/var/lib/docker/containers/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86/hosts",
	        "LogPath": "/var/lib/docker/containers/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86-json.log",
	        "Name": "/old-k8s-version-135652",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-135652:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-135652",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86",
	                "LowerDir": "/var/lib/docker/overlay2/844484687bbb53beb93db63caed98fbb47e8945606d42c727f327a603cd08220-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/844484687bbb53beb93db63caed98fbb47e8945606d42c727f327a603cd08220/merged",
	                "UpperDir": "/var/lib/docker/overlay2/844484687bbb53beb93db63caed98fbb47e8945606d42c727f327a603cd08220/diff",
	                "WorkDir": "/var/lib/docker/overlay2/844484687bbb53beb93db63caed98fbb47e8945606d42c727f327a603cd08220/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-135652",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-135652/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-135652",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-135652",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-135652",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e3c10cad70c23856d2ff2451984948a51e645d50b43ca38413e94b3e2d44add8",
	            "SandboxKey": "/var/run/docker/netns/e3c10cad70c2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-135652": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:13:12:53:36:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "90204cc66e7ad6745643724a78275aac28eb4a09363d718713af2fa28c9cb97d",
	                    "EndpointID": "3b53c1d033a4568f10051093612594e52c27f5aac520a24dc6d0f811ba56cf99",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-135652",
	                        "b175bb475b3f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
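The inspect output above is the same data that the cli_runner steps later in the log read with a Go template to find the forwarded SSH port (22/tcp -> 127.0.0.1:33414, the address the libmachine SSH client dials in the Last Start section). As an illustration only, the sketch below extracts that mapping from a trimmed-down copy of the JSON with encoding/json; the struct and variable names are made up for the example and are not minikube's types.

// Illustrative sketch only: pull the host port mapped to 22/tcp out of docker-inspect JSON.
package main

import (
	"encoding/json"
	"fmt"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	// Trimmed-down sample modelled on the NetworkSettings block in the report above.
	data := []byte(`[{"NetworkSettings":{"Ports":{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"33414"}]}}}]`)

	var containers []inspect
	if err := json.Unmarshal(data, &containers); err != nil {
		panic(err)
	}
	ssh := containers[0].NetworkSettings.Ports["22/tcp"][0]
	fmt.Printf("ssh reachable at %s:%s\n", ssh.HostIp, ssh.HostPort) // 127.0.0.1:33414
}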
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-135652 -n old-k8s-version-135652
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-135652 -n old-k8s-version-135652: exit status 2 (361.579288ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
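The --format={{.Host}} flag used above is a Go text/template evaluated against minikube's status data, which is why stdout shows only "Running" while the command still exits 2, presumably because other components (the kubelet was disabled during the pause attempt) are not running. A minimal stand-alone sketch of that template mechanism follows; the Status struct here is an assumed stand-in, not minikube's real type.

// Illustrative sketch only: render one field of a status value with a Go template,
// the way a --format={{.Host}} flag would.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// Prints "Running", matching the stdout captured above, even though the real
	// command reports a non-zero exit because not every component is up.
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}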
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-135652 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-135652 logs -n 25: (1.354970383s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-804622 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo containerd config dump                                                                                                                                                                                                  │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo crio config                                                                                                                                                                                                             │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ delete  │ -p cilium-804622                                                                                                                                                                                                                              │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:01 UTC │
	│ start   │ -p force-systemd-env-945733 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-945733  │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:02 UTC │
	│ ssh     │ force-systemd-flag-285387 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-285387 │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:01 UTC │
	│ delete  │ -p force-systemd-flag-285387                                                                                                                                                                                                                  │ force-systemd-flag-285387 │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:01 UTC │
	│ start   │ -p cert-expiration-164379 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-164379    │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:02 UTC │
	│ delete  │ -p force-systemd-env-945733                                                                                                                                                                                                                   │ force-systemd-env-945733  │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ start   │ -p cert-options-533238 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ ssh     │ cert-options-533238 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ ssh     │ -p cert-options-533238 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ delete  │ -p cert-options-533238                                                                                                                                                                                                                        │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ start   │ -p old-k8s-version-135652 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:03 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-135652 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:03 UTC │                     │
	│ stop    │ -p old-k8s-version-135652 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:03 UTC │ 17 Oct 25 20:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-135652 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:04 UTC │ 17 Oct 25 20:04 UTC │
	│ start   │ -p old-k8s-version-135652 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:04 UTC │ 17 Oct 25 20:04 UTC │
	│ image   │ old-k8s-version-135652 image list --format=json                                                                                                                                                                                               │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ pause   │ -p old-k8s-version-135652 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:04:10
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:04:10.636994  453991 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:04:10.637120  453991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:04:10.637132  453991 out.go:374] Setting ErrFile to fd 2...
	I1017 20:04:10.637138  453991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:04:10.637436  453991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:04:10.637809  453991 out.go:368] Setting JSON to false
	I1017 20:04:10.638769  453991 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10002,"bootTime":1760721449,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 20:04:10.638850  453991 start.go:141] virtualization:  
	I1017 20:04:10.642211  453991 out.go:179] * [old-k8s-version-135652] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:04:10.646029  453991 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:04:10.646075  453991 notify.go:220] Checking for updates...
	I1017 20:04:10.651971  453991 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:04:10.654813  453991 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:04:10.657722  453991 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 20:04:10.660661  453991 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:04:10.663579  453991 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:04:10.667097  453991 config.go:182] Loaded profile config "old-k8s-version-135652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 20:04:10.670539  453991 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1017 20:04:10.673272  453991 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:04:10.705914  453991 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:04:10.706037  453991 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:04:10.783140  453991 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:04:10.773124678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:04:10.783253  453991 docker.go:318] overlay module found
	I1017 20:04:10.786323  453991 out.go:179] * Using the docker driver based on existing profile
	I1017 20:04:10.789136  453991 start.go:305] selected driver: docker
	I1017 20:04:10.789158  453991 start.go:925] validating driver "docker" against &{Name:old-k8s-version-135652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-135652 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:04:10.789253  453991 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:04:10.790104  453991 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:04:10.854609  453991 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:04:10.843837424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:04:10.854947  453991 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:04:10.854981  453991 cni.go:84] Creating CNI manager for ""
	I1017 20:04:10.855038  453991 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:04:10.855079  453991 start.go:349] cluster config:
	{Name:old-k8s-version-135652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-135652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:04:10.858337  453991 out.go:179] * Starting "old-k8s-version-135652" primary control-plane node in "old-k8s-version-135652" cluster
	I1017 20:04:10.861186  453991 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:04:10.864079  453991 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:04:10.866884  453991 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 20:04:10.866937  453991 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1017 20:04:10.866950  453991 cache.go:58] Caching tarball of preloaded images
	I1017 20:04:10.866985  453991 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:04:10.867032  453991 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:04:10.867041  453991 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1017 20:04:10.867159  453991 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/config.json ...
	I1017 20:04:10.887494  453991 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:04:10.887521  453991 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:04:10.887540  453991 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:04:10.887563  453991 start.go:360] acquireMachinesLock for old-k8s-version-135652: {Name:mkb7e5198ce4bb901f93d40f8941ec8842fd8eb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:04:10.887639  453991 start.go:364] duration metric: took 50.271µs to acquireMachinesLock for "old-k8s-version-135652"
	I1017 20:04:10.887667  453991 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:04:10.887678  453991 fix.go:54] fixHost starting: 
	I1017 20:04:10.887974  453991 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Status}}
	I1017 20:04:10.905919  453991 fix.go:112] recreateIfNeeded on old-k8s-version-135652: state=Stopped err=<nil>
	W1017 20:04:10.905963  453991 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:04:10.909128  453991 out.go:252] * Restarting existing docker container for "old-k8s-version-135652" ...
	I1017 20:04:10.909234  453991 cli_runner.go:164] Run: docker start old-k8s-version-135652
	I1017 20:04:11.157916  453991 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Status}}
	I1017 20:04:11.181085  453991 kic.go:430] container "old-k8s-version-135652" state is running.
	I1017 20:04:11.181654  453991 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-135652
	I1017 20:04:11.206506  453991 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/config.json ...
	I1017 20:04:11.206723  453991 machine.go:93] provisionDockerMachine start ...
	I1017 20:04:11.206778  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:11.230227  453991 main.go:141] libmachine: Using SSH client type: native
	I1017 20:04:11.230545  453991 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I1017 20:04:11.230555  453991 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:04:11.231236  453991 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 20:04:14.384179  453991 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-135652
	
	I1017 20:04:14.384212  453991 ubuntu.go:182] provisioning hostname "old-k8s-version-135652"
	I1017 20:04:14.384273  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:14.402217  453991 main.go:141] libmachine: Using SSH client type: native
	I1017 20:04:14.402528  453991 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I1017 20:04:14.402545  453991 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-135652 && echo "old-k8s-version-135652" | sudo tee /etc/hostname
	I1017 20:04:14.563070  453991 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-135652
	
	I1017 20:04:14.563161  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:14.586320  453991 main.go:141] libmachine: Using SSH client type: native
	I1017 20:04:14.586622  453991 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I1017 20:04:14.586684  453991 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-135652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-135652/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-135652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:04:14.732688  453991 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:04:14.732716  453991 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 20:04:14.732743  453991 ubuntu.go:190] setting up certificates
	I1017 20:04:14.732752  453991 provision.go:84] configureAuth start
	I1017 20:04:14.732812  453991 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-135652
	I1017 20:04:14.750954  453991 provision.go:143] copyHostCerts
	I1017 20:04:14.751031  453991 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 20:04:14.751045  453991 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 20:04:14.751121  453991 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 20:04:14.751239  453991 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 20:04:14.751251  453991 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 20:04:14.751280  453991 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 20:04:14.751349  453991 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 20:04:14.751364  453991 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 20:04:14.751391  453991 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 20:04:14.751449  453991 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-135652 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-135652]
	I1017 20:04:14.905919  453991 provision.go:177] copyRemoteCerts
	I1017 20:04:14.905986  453991 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:04:14.906025  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:14.924682  453991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:04:15.030223  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:04:15.050137  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1017 20:04:15.067897  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 20:04:15.086004  453991 provision.go:87] duration metric: took 353.226887ms to configureAuth
	I1017 20:04:15.086030  453991 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:04:15.086241  453991 config.go:182] Loaded profile config "old-k8s-version-135652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 20:04:15.086344  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:15.105876  453991 main.go:141] libmachine: Using SSH client type: native
	I1017 20:04:15.106185  453991 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I1017 20:04:15.106208  453991 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:04:15.424718  453991 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:04:15.424745  453991 machine.go:96] duration metric: took 4.218012846s to provisionDockerMachine
	I1017 20:04:15.424755  453991 start.go:293] postStartSetup for "old-k8s-version-135652" (driver="docker")
	I1017 20:04:15.424766  453991 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:04:15.424830  453991 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:04:15.424872  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:15.447593  453991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:04:15.554286  453991 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:04:15.558104  453991 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:04:15.558133  453991 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:04:15.558145  453991 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 20:04:15.558207  453991 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 20:04:15.558290  453991 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 20:04:15.558399  453991 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:04:15.566014  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:04:15.583019  453991 start.go:296] duration metric: took 158.248872ms for postStartSetup
	I1017 20:04:15.583114  453991 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:04:15.583165  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:15.607890  453991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:04:15.713493  453991 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:04:15.718995  453991 fix.go:56] duration metric: took 4.831309373s for fixHost
	I1017 20:04:15.719018  453991 start.go:83] releasing machines lock for "old-k8s-version-135652", held for 4.831362936s
	I1017 20:04:15.719126  453991 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-135652
	I1017 20:04:15.745148  453991 ssh_runner.go:195] Run: cat /version.json
	I1017 20:04:15.745193  453991 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:04:15.745204  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:15.745247  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:15.768623  453991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:04:15.784940  453991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:04:15.966230  453991 ssh_runner.go:195] Run: systemctl --version
	I1017 20:04:15.973167  453991 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:04:16.016706  453991 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:04:16.022262  453991 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:04:16.022392  453991 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:04:16.031016  453991 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:04:16.031057  453991 start.go:495] detecting cgroup driver to use...
	I1017 20:04:16.031111  453991 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:04:16.031180  453991 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:04:16.048373  453991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:04:16.061756  453991 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:04:16.061837  453991 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:04:16.077983  453991 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:04:16.091236  453991 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:04:16.208012  453991 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:04:16.334299  453991 docker.go:234] disabling docker service ...
	I1017 20:04:16.334452  453991 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:04:16.351982  453991 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:04:16.367324  453991 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:04:16.489763  453991 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:04:16.608663  453991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:04:16.621751  453991 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:04:16.635287  453991 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1017 20:04:16.635393  453991 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:04:16.644134  453991 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:04:16.644247  453991 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:04:16.653553  453991 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:04:16.662212  453991 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:04:16.671029  453991 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:04:16.678871  453991 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:04:16.687717  453991 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:04:16.696160  453991 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:04:16.705720  453991 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:04:16.713157  453991 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:04:16.725678  453991 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:04:16.847051  453991 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:04:16.987167  453991 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:04:16.987256  453991 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:04:16.991824  453991 start.go:563] Will wait 60s for crictl version
	I1017 20:04:16.991888  453991 ssh_runner.go:195] Run: which crictl
	I1017 20:04:16.995579  453991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:04:17.026330  453991 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:04:17.026437  453991 ssh_runner.go:195] Run: crio --version
	I1017 20:04:17.060174  453991 ssh_runner.go:195] Run: crio --version
	I1017 20:04:17.095279  453991 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
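The CRI-O preparation logged between 20:04:16.621 and 20:04:16.847 amounts to pointing crictl at the CRI-O socket and pinning the pause image and cgroup driver in the CRI-O drop-in. A rough shell equivalent of those steps, with paths and values taken from the log lines above (a sketch of what the runner executed over SSH, not minikube's own code):

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pin the pause image and cgroup manager used by CRI-O, then restart it
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio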
	I1017 20:04:17.098085  453991 cli_runner.go:164] Run: docker network inspect old-k8s-version-135652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:04:17.115430  453991 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 20:04:17.119581  453991 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:04:17.128996  453991 kubeadm.go:883] updating cluster {Name:old-k8s-version-135652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-135652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:04:17.129121  453991 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 20:04:17.129177  453991 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:04:17.165017  453991 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:04:17.165041  453991 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:04:17.165103  453991 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:04:17.191498  453991 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:04:17.191522  453991 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:04:17.191531  453991 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1017 20:04:17.191636  453991 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-135652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-135652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:04:17.191718  453991 ssh_runner.go:195] Run: crio config
	I1017 20:04:17.245706  453991 cni.go:84] Creating CNI manager for ""
	I1017 20:04:17.245732  453991 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:04:17.245750  453991 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:04:17.245780  453991 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-135652 NodeName:old-k8s-version-135652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:04:17.245928  453991 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-135652"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:04:17.246011  453991 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1017 20:04:17.253664  453991 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:04:17.253743  453991 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:04:17.261062  453991 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1017 20:04:17.274517  453991 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:04:17.288311  453991 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
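The rendered kubeadm config shown above is copied to /var/tmp/minikube/kubeadm.yaml.new on the node and is later compared against the existing /var/tmp/minikube/kubeadm.yaml (see the diff at 20:04:18.357). To inspect it by hand one could use minikube's SSH wrapper; a sketch, assuming the profile name from this run:

    minikube -p old-k8s-version-135652 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
    minikube -p old-k8s-version-135652 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new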
	I1017 20:04:17.301100  453991 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:04:17.304419  453991 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:04:17.313847  453991 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:04:17.432193  453991 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:04:17.448181  453991 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652 for IP: 192.168.76.2
	I1017 20:04:17.448251  453991 certs.go:195] generating shared ca certs ...
	I1017 20:04:17.448282  453991 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:04:17.448453  453991 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 20:04:17.448561  453991 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 20:04:17.448592  453991 certs.go:257] generating profile certs ...
	I1017 20:04:17.448729  453991 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/client.key
	I1017 20:04:17.448839  453991 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/apiserver.key.7915436e
	I1017 20:04:17.448913  453991 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/proxy-client.key
	I1017 20:04:17.449066  453991 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 20:04:17.449136  453991 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 20:04:17.449163  453991 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:04:17.449218  453991 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:04:17.449271  453991 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:04:17.449326  453991 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 20:04:17.449399  453991 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:04:17.450112  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:04:17.471125  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:04:17.492047  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:04:17.514162  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 20:04:17.534399  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1017 20:04:17.555888  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:04:17.578451  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:04:17.602501  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:04:17.630430  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 20:04:17.663052  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 20:04:17.683189  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:04:17.705550  453991 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:04:17.727322  453991 ssh_runner.go:195] Run: openssl version
	I1017 20:04:17.734509  453991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 20:04:17.744037  453991 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 20:04:17.747840  453991 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 20:04:17.747934  453991 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 20:04:17.791729  453991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:04:17.799977  453991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:04:17.808095  453991 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:04:17.811934  453991 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:04:17.812001  453991 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:04:17.853167  453991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:04:17.861180  453991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 20:04:17.869543  453991 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 20:04:17.873536  453991 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 20:04:17.873635  453991 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 20:04:17.915406  453991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 20:04:17.923478  453991 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:04:17.927393  453991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:04:17.968378  453991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:04:18.009712  453991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:04:18.051890  453991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:04:18.104780  453991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:04:18.154162  453991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
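The six openssl runs above check that each control-plane certificate stays valid for at least another 24 hours: -checkend 86400 exits 0 if the certificate does not expire within 86400 seconds and non-zero otherwise. A standalone check against one of the same files (run on the node) would be:

    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo 'cert valid for at least 24h' || echo 'cert expires within 24h'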
	I1017 20:04:18.229716  453991 kubeadm.go:400] StartCluster: {Name:old-k8s-version-135652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-135652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:04:18.229831  453991 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:04:18.229916  453991 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:04:18.309058  453991 cri.go:89] found id: "bbdce86113a44ab36a088aa850f2a5cddb392bb495337b9a38ddedc57c767b53"
	I1017 20:04:18.309094  453991 cri.go:89] found id: "04fec30cc87f2919128db984312df9b8cd7bdc614707218a4d5892931a729287"
	I1017 20:04:18.309100  453991 cri.go:89] found id: "69a0b4952c8c39c59af5f7438198d6b6fe4e7cb0a49809e1f434fa02cf6b54db"
	I1017 20:04:18.309113  453991 cri.go:89] found id: "72b4880a4ac31a561331fc9731e3f6e0e2d06b3829e4c1ee82b157b2fe66d636"
	I1017 20:04:18.309116  453991 cri.go:89] found id: ""
	I1017 20:04:18.309171  453991 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 20:04:18.329855  453991 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:04:18Z" level=error msg="open /run/runc: no such file or directory"
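The runc listing fails because /run/runc does not exist on this node; the warning above shows the restart path treating that as "no paused containers" and continuing. The same container state is available through the CRI itself, which is what the crictl call just above (20:04:18.229) already uses; on the node that is:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system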
	I1017 20:04:18.329960  453991 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:04:18.342430  453991 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:04:18.342466  453991 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:04:18.342517  453991 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:04:18.353813  453991 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:04:18.354481  453991 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-135652" does not appear in /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:04:18.354834  453991 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-257739/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-135652" cluster setting kubeconfig missing "old-k8s-version-135652" context setting]
	I1017 20:04:18.355386  453991 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:04:18.357225  453991 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:04:18.367738  453991 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1017 20:04:18.367782  453991 kubeadm.go:601] duration metric: took 25.308093ms to restartPrimaryControlPlane
	I1017 20:04:18.367793  453991 kubeadm.go:402] duration metric: took 138.088617ms to StartCluster
	I1017 20:04:18.367811  453991 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:04:18.367881  453991 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:04:18.368879  453991 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:04:18.369116  453991 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:04:18.369505  453991 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:04:18.369583  453991 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-135652"
	I1017 20:04:18.369601  453991 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-135652"
	W1017 20:04:18.369608  453991 addons.go:247] addon storage-provisioner should already be in state true
	I1017 20:04:18.369629  453991 host.go:66] Checking if "old-k8s-version-135652" exists ...
	I1017 20:04:18.370051  453991 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Status}}
	I1017 20:04:18.370744  453991 config.go:182] Loaded profile config "old-k8s-version-135652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 20:04:18.370843  453991 addons.go:69] Setting dashboard=true in profile "old-k8s-version-135652"
	I1017 20:04:18.370876  453991 addons.go:238] Setting addon dashboard=true in "old-k8s-version-135652"
	W1017 20:04:18.370907  453991 addons.go:247] addon dashboard should already be in state true
	I1017 20:04:18.370950  453991 host.go:66] Checking if "old-k8s-version-135652" exists ...
	I1017 20:04:18.371447  453991 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Status}}
	I1017 20:04:18.372172  453991 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-135652"
	I1017 20:04:18.372202  453991 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-135652"
	I1017 20:04:18.372508  453991 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Status}}
	I1017 20:04:18.376550  453991 out.go:179] * Verifying Kubernetes components...
	I1017 20:04:18.388585  453991 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:04:18.421705  453991 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-135652"
	W1017 20:04:18.421736  453991 addons.go:247] addon default-storageclass should already be in state true
	I1017 20:04:18.421761  453991 host.go:66] Checking if "old-k8s-version-135652" exists ...
	I1017 20:04:18.422164  453991 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Status}}
	I1017 20:04:18.448214  453991 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:04:18.448285  453991 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 20:04:18.450620  453991 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:04:18.450643  453991 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:04:18.450705  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:18.452948  453991 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:04:18.452971  453991 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:04:18.453027  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:18.456498  453991 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1017 20:04:18.459468  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 20:04:18.459494  453991 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 20:04:18.459562  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:18.516471  453991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:04:18.519028  453991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:04:18.522030  453991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:04:18.698013  453991 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:04:18.709629  453991 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:04:18.769236  453991 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:04:18.797569  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 20:04:18.797643  453991 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 20:04:18.859589  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 20:04:18.859682  453991 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 20:04:18.935144  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 20:04:18.935215  453991 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 20:04:19.016667  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 20:04:19.016727  453991 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 20:04:19.042590  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 20:04:19.042665  453991 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 20:04:19.069701  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 20:04:19.069772  453991 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 20:04:19.087871  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 20:04:19.087940  453991 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 20:04:19.111707  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 20:04:19.111780  453991 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 20:04:19.132354  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 20:04:19.132426  453991 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 20:04:19.156249  453991 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 20:04:24.362207  453991 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.664145728s)
	I1017 20:04:24.362264  453991 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.6525699s)
	I1017 20:04:24.362296  453991 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-135652" to be "Ready" ...
	I1017 20:04:24.362587  453991 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.593289945s)
	I1017 20:04:24.396812  453991 node_ready.go:49] node "old-k8s-version-135652" is "Ready"
	I1017 20:04:24.396886  453991 node_ready.go:38] duration metric: took 34.569499ms for node "old-k8s-version-135652" to be "Ready" ...
	I1017 20:04:24.396925  453991 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:04:24.397007  453991 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:04:24.914555  453991 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.758216788s)
	I1017 20:04:24.914797  453991 api_server.go:72] duration metric: took 6.545648036s to wait for apiserver process to appear ...
	I1017 20:04:24.914830  453991 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:04:24.914855  453991 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:04:24.917851  453991 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-135652 addons enable metrics-server
	
	I1017 20:04:24.920855  453991 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1017 20:04:24.923764  453991 addons.go:514] duration metric: took 6.554246895s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
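Once the addons report as enabled, their workloads can be checked with ordinary kubectl against the profile's context (minikube names the kubeconfig context after the profile, as the kubeconfig repair at 20:04:18.354 shows); a sketch:

    kubectl --context old-k8s-version-135652 -n kubernetes-dashboard get pods
    kubectl --context old-k8s-version-135652 -n kube-system get pod storage-provisioner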
	I1017 20:04:24.928499  453991 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1017 20:04:24.930152  453991 api_server.go:141] control plane version: v1.28.0
	I1017 20:04:24.930226  453991 api_server.go:131] duration metric: took 15.381466ms to wait for apiserver health ...
	I1017 20:04:24.930261  453991 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:04:24.938672  453991 system_pods.go:59] 8 kube-system pods found
	I1017 20:04:24.938710  453991 system_pods.go:61] "coredns-5dd5756b68-74pn6" [a9d889b2-d91c-493f-a0a8-de610e7240d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:04:24.938723  453991 system_pods.go:61] "etcd-old-k8s-version-135652" [985d2d7b-3099-455a-9396-243cdd940ebf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:04:24.938729  453991 system_pods.go:61] "kindnet-spvzd" [50b2e826-62cc-4853-974d-13b9ab81b802] Running
	I1017 20:04:24.938736  453991 system_pods.go:61] "kube-apiserver-old-k8s-version-135652" [9e376f4f-93e6-4ce5-ab1e-051909c3d815] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:04:24.938743  453991 system_pods.go:61] "kube-controller-manager-old-k8s-version-135652" [a0affdd9-608a-4028-b1c7-d6a2773d33f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:04:24.938748  453991 system_pods.go:61] "kube-proxy-5qhvs" [ca7a19b2-9842-4190-85f5-9eb4e0985eea] Running
	I1017 20:04:24.938756  453991 system_pods.go:61] "kube-scheduler-old-k8s-version-135652" [a19340fe-f4de-443e-b749-f461c5fd13bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:04:24.938760  453991 system_pods.go:61] "storage-provisioner" [af094a04-92d3-44b6-b662-542feecaac6e] Running
	I1017 20:04:24.938767  453991 system_pods.go:74] duration metric: took 8.487069ms to wait for pod list to return data ...
	I1017 20:04:24.938774  453991 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:04:24.941677  453991 default_sa.go:45] found service account: "default"
	I1017 20:04:24.941696  453991 default_sa.go:55] duration metric: took 2.916236ms for default service account to be created ...
	I1017 20:04:24.941704  453991 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:04:24.945547  453991 system_pods.go:86] 8 kube-system pods found
	I1017 20:04:24.945632  453991 system_pods.go:89] "coredns-5dd5756b68-74pn6" [a9d889b2-d91c-493f-a0a8-de610e7240d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:04:24.945658  453991 system_pods.go:89] "etcd-old-k8s-version-135652" [985d2d7b-3099-455a-9396-243cdd940ebf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:04:24.945680  453991 system_pods.go:89] "kindnet-spvzd" [50b2e826-62cc-4853-974d-13b9ab81b802] Running
	I1017 20:04:24.945722  453991 system_pods.go:89] "kube-apiserver-old-k8s-version-135652" [9e376f4f-93e6-4ce5-ab1e-051909c3d815] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:04:24.945750  453991 system_pods.go:89] "kube-controller-manager-old-k8s-version-135652" [a0affdd9-608a-4028-b1c7-d6a2773d33f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:04:24.945775  453991 system_pods.go:89] "kube-proxy-5qhvs" [ca7a19b2-9842-4190-85f5-9eb4e0985eea] Running
	I1017 20:04:24.945815  453991 system_pods.go:89] "kube-scheduler-old-k8s-version-135652" [a19340fe-f4de-443e-b749-f461c5fd13bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:04:24.945837  453991 system_pods.go:89] "storage-provisioner" [af094a04-92d3-44b6-b662-542feecaac6e] Running
	I1017 20:04:24.945861  453991 system_pods.go:126] duration metric: took 4.151483ms to wait for k8s-apps to be running ...
	I1017 20:04:24.945899  453991 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:04:24.945982  453991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:04:24.961396  453991 system_svc.go:56] duration metric: took 15.487802ms WaitForService to wait for kubelet
	I1017 20:04:24.961475  453991 kubeadm.go:586] duration metric: took 6.592325688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:04:24.961511  453991 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:04:24.965991  453991 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:04:24.966073  453991 node_conditions.go:123] node cpu capacity is 2
	I1017 20:04:24.966102  453991 node_conditions.go:105] duration metric: took 4.569588ms to run NodePressure ...
	I1017 20:04:24.966129  453991 start.go:241] waiting for startup goroutines ...
	I1017 20:04:24.966159  453991 start.go:246] waiting for cluster config update ...
	I1017 20:04:24.966199  453991 start.go:255] writing updated cluster config ...
	I1017 20:04:24.966535  453991 ssh_runner.go:195] Run: rm -f paused
	I1017 20:04:24.970549  453991 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:04:24.975578  453991 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-74pn6" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 20:04:26.981431  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:29.480812  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:31.481873  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:33.482153  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:35.981513  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:37.982652  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:39.983060  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:42.481181  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:44.482457  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:46.484535  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:48.982253  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:51.487437  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:53.985510  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:55.986377  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	I1017 20:04:56.982355  453991 pod_ready.go:94] pod "coredns-5dd5756b68-74pn6" is "Ready"
	I1017 20:04:56.982379  453991 pod_ready.go:86] duration metric: took 32.006722316s for pod "coredns-5dd5756b68-74pn6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:56.985449  453991 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:56.991164  453991 pod_ready.go:94] pod "etcd-old-k8s-version-135652" is "Ready"
	I1017 20:04:56.991196  453991 pod_ready.go:86] duration metric: took 5.721639ms for pod "etcd-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:56.994387  453991 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:56.999199  453991 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-135652" is "Ready"
	I1017 20:04:56.999226  453991 pod_ready.go:86] duration metric: took 4.810567ms for pod "kube-apiserver-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:57.003987  453991 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:57.180473  453991 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-135652" is "Ready"
	I1017 20:04:57.180502  453991 pod_ready.go:86] duration metric: took 176.484933ms for pod "kube-controller-manager-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:57.380476  453991 pod_ready.go:83] waiting for pod "kube-proxy-5qhvs" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:57.780217  453991 pod_ready.go:94] pod "kube-proxy-5qhvs" is "Ready"
	I1017 20:04:57.780248  453991 pod_ready.go:86] duration metric: took 399.739267ms for pod "kube-proxy-5qhvs" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:57.981117  453991 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:58.379766  453991 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-135652" is "Ready"
	I1017 20:04:58.379795  453991 pod_ready.go:86] duration metric: took 398.650091ms for pod "kube-scheduler-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:58.379807  453991 pod_ready.go:40] duration metric: took 33.409175647s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:04:58.434433  453991 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1017 20:04:58.437748  453991 out.go:203] 
	W1017 20:04:58.440749  453991 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1017 20:04:58.443770  453991 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1017 20:04:58.446769  453991 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-135652" cluster and "default" namespace by default
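The readiness polling between 20:04:24 and 20:04:58 (about 33 s, almost all of it spent on coredns) checks the same condition kubectl can wait on directly; a rough equivalent using the kube-dns label the log polls:

    kubectl --context old-k8s-version-135652 -n kube-system wait --for=condition=Ready \
      pod -l k8s-app=kube-dns --timeout=4m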
	
	
	==> CRI-O <==
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.650562654Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d3eddd87-ba93-4bc8-b4d2-29fbd31e97be name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.651913977Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=58a50707-2a1e-4c09-a505-4c51a9d83253 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.652965468Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr/dashboard-metrics-scraper" id=cb38142f-de3c-4103-87c8-9852451302fe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.653195927Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.663264335Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.66421182Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.6852713Z" level=info msg="Created container 3f559ef86315b771de0dfbdb515dd71e17f13eafb0a40f4dc619305b0767aeff: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr/dashboard-metrics-scraper" id=cb38142f-de3c-4103-87c8-9852451302fe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.686135416Z" level=info msg="Starting container: 3f559ef86315b771de0dfbdb515dd71e17f13eafb0a40f4dc619305b0767aeff" id=2f208142-1e36-46a0-a99c-706d3c679160 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.687869826Z" level=info msg="Started container" PID=1632 containerID=3f559ef86315b771de0dfbdb515dd71e17f13eafb0a40f4dc619305b0767aeff description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr/dashboard-metrics-scraper id=2f208142-1e36-46a0-a99c-706d3c679160 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1f989dc473ae32fbd8c5406c4fee9929ba69b41ff68c1efa7a93134a7f74e186
	Oct 17 20:04:57 old-k8s-version-135652 conmon[1630]: conmon 3f559ef86315b771de0d <ninfo>: container 1632 exited with status 1
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.836285052Z" level=info msg="Removing container: 53d961ddbdae2cbd1275c9dd7fc9f4c54be03f325546eb8ec9baa9c22fcd0cda" id=06cc7e46-7c5f-4af6-8ef3-025d743de5de name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.844741302Z" level=info msg="Error loading conmon cgroup of container 53d961ddbdae2cbd1275c9dd7fc9f4c54be03f325546eb8ec9baa9c22fcd0cda: cgroup deleted" id=06cc7e46-7c5f-4af6-8ef3-025d743de5de name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.847762872Z" level=info msg="Removed container 53d961ddbdae2cbd1275c9dd7fc9f4c54be03f325546eb8ec9baa9c22fcd0cda: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr/dashboard-metrics-scraper" id=06cc7e46-7c5f-4af6-8ef3-025d743de5de name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.467691615Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.472144095Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.472192438Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.472234513Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.47563012Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.475664236Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.475685552Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.479324057Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.479360306Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.479383042Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.482470374Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.482499247Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	3f559ef86315b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   1f989dc473ae3       dashboard-metrics-scraper-5f989dc9cf-f5dzr       kubernetes-dashboard
	e2905732bd31a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           18 seconds ago      Running             storage-provisioner         2                   146ec4e8af493       storage-provisioner                              kube-system
	05a6be346e43f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   26 seconds ago      Running             kubernetes-dashboard        0                   56e6e1fa3d201       kubernetes-dashboard-8694d4445c-xwfgw            kubernetes-dashboard
	3cee913666df0       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           49 seconds ago      Running             coredns                     1                   71499930ff666       coredns-5dd5756b68-74pn6                         kube-system
	61b71350a49b4       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   439f31896c045       busybox                                          default
	fa973d114ac94       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago      Exited              storage-provisioner         1                   146ec4e8af493       storage-provisioner                              kube-system
	2530bf6fb2cb6       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           49 seconds ago      Running             kube-proxy                  1                   bab20b7b0337c       kube-proxy-5qhvs                                 kube-system
	4e2070657cd73       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   575c72f67fe82       kindnet-spvzd                                    kube-system
	bbdce86113a44       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           55 seconds ago      Running             kube-apiserver              1                   90fba20466d45       kube-apiserver-old-k8s-version-135652            kube-system
	04fec30cc87f2       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           55 seconds ago      Running             etcd                        1                   d615897286357       etcd-old-k8s-version-135652                      kube-system
	69a0b4952c8c3       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           55 seconds ago      Running             kube-scheduler              1                   c8268f48ce56e       kube-scheduler-old-k8s-version-135652            kube-system
	72b4880a4ac31       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           55 seconds ago      Running             kube-controller-manager     1                   3d5848268ef45       kube-controller-manager-old-k8s-version-135652   kube-system
	
	
	==> coredns [3cee913666df08b4596783394b5ea5ef68e091d315d89e582c2c7c642e59ea67] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46846 - 56541 "HINFO IN 8234386990561194953.2354623737495795273. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012352521s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-135652
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-135652
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=old-k8s-version-135652
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_03_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:03:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-135652
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:05:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:04:53 +0000   Fri, 17 Oct 2025 20:03:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:04:53 +0000   Fri, 17 Oct 2025 20:03:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:04:53 +0000   Fri, 17 Oct 2025 20:03:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:04:53 +0000   Fri, 17 Oct 2025 20:03:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-135652
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                9cdb3944-7199-44fe-af06-5219f78e8dc9
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-5dd5756b68-74pn6                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     102s
	  kube-system                 etcd-old-k8s-version-135652                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-spvzd                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-old-k8s-version-135652             250m (12%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-old-k8s-version-135652    200m (10%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-5qhvs                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-old-k8s-version-135652             100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-f5dzr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-xwfgw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 48s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node old-k8s-version-135652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node old-k8s-version-135652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node old-k8s-version-135652 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s               node-controller  Node old-k8s-version-135652 event: Registered Node old-k8s-version-135652 in Controller
	  Normal  NodeReady                89s                kubelet          Node old-k8s-version-135652 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node old-k8s-version-135652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node old-k8s-version-135652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node old-k8s-version-135652 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           37s                node-controller  Node old-k8s-version-135652 event: Registered Node old-k8s-version-135652 in Controller
	
	
	==> dmesg <==
	[Oct17 19:36] overlayfs: idmapped layers are currently not supported
	[Oct17 19:41] overlayfs: idmapped layers are currently not supported
	[ +34.896999] overlayfs: idmapped layers are currently not supported
	[Oct17 19:42] overlayfs: idmapped layers are currently not supported
	[Oct17 19:43] overlayfs: idmapped layers are currently not supported
	[Oct17 19:45] overlayfs: idmapped layers are currently not supported
	[Oct17 19:46] overlayfs: idmapped layers are currently not supported
	[ +18.070710] overlayfs: idmapped layers are currently not supported
	[Oct17 19:47] overlayfs: idmapped layers are currently not supported
	[ +43.697346] overlayfs: idmapped layers are currently not supported
	[Oct17 19:48] overlayfs: idmapped layers are currently not supported
	[Oct17 19:49] overlayfs: idmapped layers are currently not supported
	[ +26.194162] overlayfs: idmapped layers are currently not supported
	[Oct17 19:50] overlayfs: idmapped layers are currently not supported
	[Oct17 19:52] overlayfs: idmapped layers are currently not supported
	[Oct17 19:54] overlayfs: idmapped layers are currently not supported
	[Oct17 19:55] overlayfs: idmapped layers are currently not supported
	[Oct17 19:56] overlayfs: idmapped layers are currently not supported
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:01] overlayfs: idmapped layers are currently not supported
	[ +29.873287] overlayfs: idmapped layers are currently not supported
	[Oct17 20:02] overlayfs: idmapped layers are currently not supported
	[ +29.827785] overlayfs: idmapped layers are currently not supported
	[Oct17 20:03] overlayfs: idmapped layers are currently not supported
	[Oct17 20:04] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [04fec30cc87f2919128db984312df9b8cd7bdc614707218a4d5892931a729287] <==
	{"level":"info","ts":"2025-10-17T20:04:18.492037Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-10-17T20:04:18.496723Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-17T20:04:18.4973Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-17T20:04:18.496886Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T20:04:18.497393Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T20:04:18.497434Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T20:04:18.497523Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-17T20:04:18.497188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-17T20:04:18.497668Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-17T20:04:18.497785Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T20:04:18.497838Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T20:04:19.652572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-17T20:04:19.652678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-17T20:04:19.652718Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-17T20:04:19.652755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-17T20:04:19.652789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-17T20:04:19.652826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-17T20:04:19.652859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-17T20:04:19.654118Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-135652 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-17T20:04:19.654306Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T20:04:19.655254Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-17T20:04:19.664373Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T20:04:19.665469Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-17T20:04:19.665948Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-17T20:04:19.665998Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:05:13 up  2:47,  0 user,  load average: 1.68, 2.79, 2.53
	Linux old-k8s-version-135652 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4e2070657cd73d4d62f63f2797cbc953d5b2ae8ddd88015521bd823860afa9a3] <==
	I1017 20:04:24.306377       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:04:24.306550       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 20:04:24.306665       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:04:24.306676       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:04:24.306689       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:04:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:04:24.502018       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:04:24.502042       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:04:24.502051       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:04:24.502551       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 20:04:54.461960       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 20:04:54.502567       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 20:04:54.502760       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 20:04:54.502913       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1017 20:04:55.902184       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:04:55.902211       1 metrics.go:72] Registering metrics
	I1017 20:04:55.902278       1 controller.go:711] "Syncing nftables rules"
	I1017 20:05:04.467372       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:05:04.467427       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bbdce86113a44ab36a088aa850f2a5cddb392bb495337b9a38ddedc57c767b53] <==
	I1017 20:04:22.928779       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:04:22.957285       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1017 20:04:22.978352       1 shared_informer.go:318] Caches are synced for configmaps
	I1017 20:04:22.978468       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1017 20:04:22.980327       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1017 20:04:22.980422       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:04:22.986502       1 aggregator.go:166] initial CRD sync complete...
	I1017 20:04:22.986625       1 autoregister_controller.go:141] Starting autoregister controller
	I1017 20:04:22.986657       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:04:22.986689       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:04:22.987931       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1017 20:04:23.006097       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1017 20:04:23.006233       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1017 20:04:23.013471       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:04:23.603301       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:04:24.654704       1 controller.go:624] quota admission added evaluator for: namespaces
	I1017 20:04:24.745714       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1017 20:04:24.783563       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:04:24.807082       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:04:24.819850       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1017 20:04:24.878006       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.67.242"}
	I1017 20:04:24.906170       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.166.185"}
	I1017 20:04:35.954555       1 controller.go:624] quota admission added evaluator for: endpoints
	I1017 20:04:36.255670       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1017 20:04:36.469113       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [72b4880a4ac31a561331fc9731e3f6e0e2d06b3829e4c1ee82b157b2fe66d636] <==
	I1017 20:04:36.317126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="370.451719ms"
	I1017 20:04:36.317604       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="122.556µs"
	I1017 20:04:36.323197       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-xwfgw"
	I1017 20:04:36.323332       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-f5dzr"
	I1017 20:04:36.344215       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="80.938048ms"
	I1017 20:04:36.350968       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="90.324628ms"
	I1017 20:04:36.357375       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.112475ms"
	I1017 20:04:36.357973       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="69.339µs"
	I1017 20:04:36.387037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="35.960624ms"
	I1017 20:04:36.387233       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.847µs"
	I1017 20:04:36.399435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="62.882µs"
	I1017 20:04:36.480073       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1017 20:04:36.480251       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1017 20:04:36.503840       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 20:04:36.544136       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 20:04:36.544167       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1017 20:04:41.791816       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="176.758µs"
	I1017 20:04:42.796718       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.683µs"
	I1017 20:04:46.661540       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="89.023µs"
	I1017 20:04:47.820600       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.947327ms"
	I1017 20:04:47.821850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="36.906µs"
	I1017 20:04:56.683900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.195197ms"
	I1017 20:04:56.684004       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.714µs"
	I1017 20:04:57.842116       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.603µs"
	I1017 20:05:06.647587       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="48.967µs"
	
	
	==> kube-proxy [2530bf6fb2cb6db3309ea4398f8a1439523777523161e863d0aff28c3cfb7f45] <==
	I1017 20:04:24.483647       1 server_others.go:69] "Using iptables proxy"
	I1017 20:04:24.498529       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1017 20:04:24.636456       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:04:24.648650       1 server_others.go:152] "Using iptables Proxier"
	I1017 20:04:24.648685       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1017 20:04:24.648696       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1017 20:04:24.648718       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1017 20:04:24.648916       1 server.go:846] "Version info" version="v1.28.0"
	I1017 20:04:24.648926       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:04:24.650110       1 config.go:188] "Starting service config controller"
	I1017 20:04:24.650120       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1017 20:04:24.650136       1 config.go:97] "Starting endpoint slice config controller"
	I1017 20:04:24.650139       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1017 20:04:24.657636       1 config.go:315] "Starting node config controller"
	I1017 20:04:24.664804       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1017 20:04:24.751098       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1017 20:04:24.751229       1 shared_informer.go:318] Caches are synced for service config
	I1017 20:04:24.765197       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [69a0b4952c8c39c59af5f7438198d6b6fe4e7cb0a49809e1f434fa02cf6b54db] <==
	I1017 20:04:22.909448       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1017 20:04:22.912619       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1017 20:04:22.916757       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1017 20:04:22.916876       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1017 20:04:22.927732       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1017 20:04:22.927775       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 20:04:22.931903       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1017 20:04:22.931977       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1017 20:04:22.932060       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1017 20:04:22.932080       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1017 20:04:22.932165       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1017 20:04:22.932181       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1017 20:04:22.932244       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1017 20:04:22.932259       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1017 20:04:22.932314       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1017 20:04:22.932328       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1017 20:04:22.932397       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1017 20:04:22.932411       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1017 20:04:22.932466       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1017 20:04:22.932481       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1017 20:04:22.932546       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1017 20:04:22.932562       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1017 20:04:22.933094       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1017 20:04:22.933173       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1017 20:04:24.209866       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 17 20:04:36 old-k8s-version-135652 kubelet[768]: I1017 20:04:36.340942     768 topology_manager.go:215] "Topology Admit Handler" podUID="cc2416cb-c5d5-48c6-870f-828b378c0b23" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-xwfgw"
	Oct 17 20:04:36 old-k8s-version-135652 kubelet[768]: I1017 20:04:36.403816     768 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnckj\" (UniqueName: \"kubernetes.io/projected/ba9a7e68-9a98-4197-b4d2-4e7e495d58ae-kube-api-access-rnckj\") pod \"dashboard-metrics-scraper-5f989dc9cf-f5dzr\" (UID: \"ba9a7e68-9a98-4197-b4d2-4e7e495d58ae\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr"
	Oct 17 20:04:36 old-k8s-version-135652 kubelet[768]: I1017 20:04:36.404094     768 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlw65\" (UniqueName: \"kubernetes.io/projected/cc2416cb-c5d5-48c6-870f-828b378c0b23-kube-api-access-qlw65\") pod \"kubernetes-dashboard-8694d4445c-xwfgw\" (UID: \"cc2416cb-c5d5-48c6-870f-828b378c0b23\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xwfgw"
	Oct 17 20:04:36 old-k8s-version-135652 kubelet[768]: I1017 20:04:36.404220     768 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ba9a7e68-9a98-4197-b4d2-4e7e495d58ae-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-f5dzr\" (UID: \"ba9a7e68-9a98-4197-b4d2-4e7e495d58ae\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr"
	Oct 17 20:04:36 old-k8s-version-135652 kubelet[768]: I1017 20:04:36.404343     768 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cc2416cb-c5d5-48c6-870f-828b378c0b23-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-xwfgw\" (UID: \"cc2416cb-c5d5-48c6-870f-828b378c0b23\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xwfgw"
	Oct 17 20:04:36 old-k8s-version-135652 kubelet[768]: W1017 20:04:36.665197     768 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86/crio-1f989dc473ae32fbd8c5406c4fee9929ba69b41ff68c1efa7a93134a7f74e186 WatchSource:0}: Error finding container 1f989dc473ae32fbd8c5406c4fee9929ba69b41ff68c1efa7a93134a7f74e186: Status 404 returned error can't find the container with id 1f989dc473ae32fbd8c5406c4fee9929ba69b41ff68c1efa7a93134a7f74e186
	Oct 17 20:04:36 old-k8s-version-135652 kubelet[768]: W1017 20:04:36.681406     768 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86/crio-56e6e1fa3d20198e21fff890cad23fef0bf8f1daaa5f26f7af72b2dc8bb01084 WatchSource:0}: Error finding container 56e6e1fa3d20198e21fff890cad23fef0bf8f1daaa5f26f7af72b2dc8bb01084: Status 404 returned error can't find the container with id 56e6e1fa3d20198e21fff890cad23fef0bf8f1daaa5f26f7af72b2dc8bb01084
	Oct 17 20:04:41 old-k8s-version-135652 kubelet[768]: I1017 20:04:41.775862     768 scope.go:117] "RemoveContainer" containerID="fc234a63e6b0e6253e1670b8dad649256d1ad3150fa75fa8870814b817c88aee"
	Oct 17 20:04:42 old-k8s-version-135652 kubelet[768]: I1017 20:04:42.780508     768 scope.go:117] "RemoveContainer" containerID="fc234a63e6b0e6253e1670b8dad649256d1ad3150fa75fa8870814b817c88aee"
	Oct 17 20:04:42 old-k8s-version-135652 kubelet[768]: I1017 20:04:42.780783     768 scope.go:117] "RemoveContainer" containerID="53d961ddbdae2cbd1275c9dd7fc9f4c54be03f325546eb8ec9baa9c22fcd0cda"
	Oct 17 20:04:42 old-k8s-version-135652 kubelet[768]: E1017 20:04:42.781065     768 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f5dzr_kubernetes-dashboard(ba9a7e68-9a98-4197-b4d2-4e7e495d58ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr" podUID="ba9a7e68-9a98-4197-b4d2-4e7e495d58ae"
	Oct 17 20:04:46 old-k8s-version-135652 kubelet[768]: I1017 20:04:46.633940     768 scope.go:117] "RemoveContainer" containerID="53d961ddbdae2cbd1275c9dd7fc9f4c54be03f325546eb8ec9baa9c22fcd0cda"
	Oct 17 20:04:46 old-k8s-version-135652 kubelet[768]: E1017 20:04:46.634272     768 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f5dzr_kubernetes-dashboard(ba9a7e68-9a98-4197-b4d2-4e7e495d58ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr" podUID="ba9a7e68-9a98-4197-b4d2-4e7e495d58ae"
	Oct 17 20:04:47 old-k8s-version-135652 kubelet[768]: I1017 20:04:47.809638     768 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xwfgw" podStartSLOduration=1.4919803489999999 podCreationTimestamp="2025-10-17 20:04:36 +0000 UTC" firstStartedPulling="2025-10-17 20:04:36.685576652 +0000 UTC m=+19.232972064" lastFinishedPulling="2025-10-17 20:04:47.003168686 +0000 UTC m=+29.550564106" observedRunningTime="2025-10-17 20:04:47.80952614 +0000 UTC m=+30.356921552" watchObservedRunningTime="2025-10-17 20:04:47.809572391 +0000 UTC m=+30.356967795"
	Oct 17 20:04:54 old-k8s-version-135652 kubelet[768]: I1017 20:04:54.814104     768 scope.go:117] "RemoveContainer" containerID="fa973d114ac945d1a893e6ca7e8c2be9fdc00ee2b43156c1e95432093ff9c4d7"
	Oct 17 20:04:57 old-k8s-version-135652 kubelet[768]: I1017 20:04:57.649557     768 scope.go:117] "RemoveContainer" containerID="53d961ddbdae2cbd1275c9dd7fc9f4c54be03f325546eb8ec9baa9c22fcd0cda"
	Oct 17 20:04:57 old-k8s-version-135652 kubelet[768]: I1017 20:04:57.824996     768 scope.go:117] "RemoveContainer" containerID="53d961ddbdae2cbd1275c9dd7fc9f4c54be03f325546eb8ec9baa9c22fcd0cda"
	Oct 17 20:04:57 old-k8s-version-135652 kubelet[768]: I1017 20:04:57.825269     768 scope.go:117] "RemoveContainer" containerID="3f559ef86315b771de0dfbdb515dd71e17f13eafb0a40f4dc619305b0767aeff"
	Oct 17 20:04:57 old-k8s-version-135652 kubelet[768]: E1017 20:04:57.825531     768 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f5dzr_kubernetes-dashboard(ba9a7e68-9a98-4197-b4d2-4e7e495d58ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr" podUID="ba9a7e68-9a98-4197-b4d2-4e7e495d58ae"
	Oct 17 20:05:06 old-k8s-version-135652 kubelet[768]: I1017 20:05:06.634204     768 scope.go:117] "RemoveContainer" containerID="3f559ef86315b771de0dfbdb515dd71e17f13eafb0a40f4dc619305b0767aeff"
	Oct 17 20:05:06 old-k8s-version-135652 kubelet[768]: E1017 20:05:06.634505     768 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f5dzr_kubernetes-dashboard(ba9a7e68-9a98-4197-b4d2-4e7e495d58ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr" podUID="ba9a7e68-9a98-4197-b4d2-4e7e495d58ae"
	Oct 17 20:05:10 old-k8s-version-135652 kubelet[768]: I1017 20:05:10.747902     768 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 17 20:05:10 old-k8s-version-135652 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:05:10 old-k8s-version-135652 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:05:10 old-k8s-version-135652 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [05a6be346e43f4a055341332d969f75e6f690027eef3065dcf733b4b45ebb9bf] <==
	2025/10/17 20:04:47 Using namespace: kubernetes-dashboard
	2025/10/17 20:04:47 Using in-cluster config to connect to apiserver
	2025/10/17 20:04:47 Using secret token for csrf signing
	2025/10/17 20:04:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 20:04:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 20:04:47 Successful initial request to the apiserver, version: v1.28.0
	2025/10/17 20:04:47 Generating JWE encryption key
	2025/10/17 20:04:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 20:04:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 20:04:47 Initializing JWE encryption key from synchronized object
	2025/10/17 20:04:47 Creating in-cluster Sidecar client
	2025/10/17 20:04:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 20:04:47 Serving insecurely on HTTP port: 9090
	2025/10/17 20:04:47 Starting overwatch
	
	
	==> storage-provisioner [e2905732bd31a768f3a5cbf8925e8ba87524f0e93f091c5ef5c4eff9b2bbfea1] <==
	I1017 20:04:54.866333       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:04:54.878633       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:04:54.878773       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1017 20:05:12.282657       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:05:12.284630       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-135652_6a5a4fc4-f618-4440-b940-9b44c7d2b495!
	I1017 20:05:12.290321       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ebb79cd-89e4-4fbf-baf1-fb4d250e17dc", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-135652_6a5a4fc4-f618-4440-b940-9b44c7d2b495 became leader
	I1017 20:05:12.385460       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-135652_6a5a4fc4-f618-4440-b940-9b44c7d2b495!
	
	
	==> storage-provisioner [fa973d114ac945d1a893e6ca7e8c2be9fdc00ee2b43156c1e95432093ff9c4d7] <==
	I1017 20:04:24.287917       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 20:04:54.293843       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-135652 -n old-k8s-version-135652
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-135652 -n old-k8s-version-135652: exit status 2 (364.113029ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-135652 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-135652
helpers_test.go:243: (dbg) docker inspect old-k8s-version-135652:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86",
	        "Created": "2025-10-17T20:02:51.429282597Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 454115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:04:10.938638472Z",
	            "FinishedAt": "2025-10-17T20:04:10.094661885Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86/hostname",
	        "HostsPath": "/var/lib/docker/containers/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86/hosts",
	        "LogPath": "/var/lib/docker/containers/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86-json.log",
	        "Name": "/old-k8s-version-135652",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-135652:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-135652",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86",
	                "LowerDir": "/var/lib/docker/overlay2/844484687bbb53beb93db63caed98fbb47e8945606d42c727f327a603cd08220-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/844484687bbb53beb93db63caed98fbb47e8945606d42c727f327a603cd08220/merged",
	                "UpperDir": "/var/lib/docker/overlay2/844484687bbb53beb93db63caed98fbb47e8945606d42c727f327a603cd08220/diff",
	                "WorkDir": "/var/lib/docker/overlay2/844484687bbb53beb93db63caed98fbb47e8945606d42c727f327a603cd08220/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-135652",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-135652/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-135652",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-135652",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-135652",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e3c10cad70c23856d2ff2451984948a51e645d50b43ca38413e94b3e2d44add8",
	            "SandboxKey": "/var/run/docker/netns/e3c10cad70c2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-135652": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:13:12:53:36:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "90204cc66e7ad6745643724a78275aac28eb4a09363d718713af2fa28c9cb97d",
	                    "EndpointID": "3b53c1d033a4568f10051093612594e52c27f5aac520a24dc6d0f811ba56cf99",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-135652",
	                        "b175bb475b3f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
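
The JSON above is the raw "docker container inspect" dump the post-mortem helper captures for the old-k8s-version-135652 node container. The part the rest of this log leans on is NetworkSettings.Ports: the container's 22/tcp, 2376/tcp, 5000/tcp, 8443/tcp and 32443/tcp are published on 127.0.0.1 (host ports 33414 through 33418), and the restart logged further down resolves the SSH port with the same kind of inspect template. A minimal, hypothetical Go sketch of that lookup (not part of the test suite; it only assumes the docker CLI is on PATH and the container still exists):

// port_dump.go: print the published host port for each exposed container port,
// using the same "docker container inspect -f <go-template>" approach that the
// minikube provisioning log below uses for 22/tcp. Purely illustrative.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const container = "old-k8s-version-135652" // profile/container name from this report
	// The "if $bindings" guard keeps exposed-but-unpublished ports from breaking the template.
	const format = `{{range $port, $bindings := .NetworkSettings.Ports}}{{if $bindings}}{{$port}} -> {{(index $bindings 0).HostPort}}{{"\n"}}{{end}}{{end}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "inspect failed: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Print(string(out)) // e.g. "22/tcp -> 33414"
}

Against the state captured above this would print, among others, "22/tcp -> 33414", which is the port the libmachine SSH client dials at 20:04:11 in the logs below.
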
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-135652 -n old-k8s-version-135652
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-135652 -n old-k8s-version-135652: exit status 2 (340.628466ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
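
The "(may be ok)" note fits the test flow: the profile has just been paused, so the docker host is still Running while the Kubernetes components are deliberately stopped. minikube's status command documents its exit code as a bit field (1 if the host is not OK, 2 if the cluster is not OK, 4 if Kubernetes is not OK, summed together); assuming that encoding applies here, exit status 2 with Host reporting Running is the expected shape right after a pause rather than an independent failure. A tiny illustrative decoder under that assumption:

// status_exit.go: decode a "minikube status" exit code under the documented
// bit-field encoding (least-significant bit = host, then cluster, then Kubernetes).
// Illustrative only; the encoding is an assumption taken from the minikube docs,
// not something asserted by this test run.
package main

import (
	"fmt"
	"strings"
)

func decodeStatusExit(code int) string {
	var notOK []string
	if code&1 != 0 {
		notOK = append(notOK, "host")
	}
	if code&2 != 0 {
		notOK = append(notOK, "cluster")
	}
	if code&4 != 0 {
		notOK = append(notOK, "kubernetes")
	}
	if len(notOK) == 0 {
		return "all components OK"
	}
	return "not OK: " + strings.Join(notOK, ", ")
}

func main() {
	fmt.Println(decodeStatusExit(2)) // "not OK: cluster", i.e. paused cluster with the host still up
}
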
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-135652 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-135652 logs -n 25: (1.322704067s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-804622 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo containerd config dump                                                                                                                                                                                                  │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ -p cilium-804622 sudo crio config                                                                                                                                                                                                             │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ delete  │ -p cilium-804622                                                                                                                                                                                                                              │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:01 UTC │
	│ start   │ -p force-systemd-env-945733 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-945733  │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:02 UTC │
	│ ssh     │ force-systemd-flag-285387 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-285387 │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:01 UTC │
	│ delete  │ -p force-systemd-flag-285387                                                                                                                                                                                                                  │ force-systemd-flag-285387 │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:01 UTC │
	│ start   │ -p cert-expiration-164379 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-164379    │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:02 UTC │
	│ delete  │ -p force-systemd-env-945733                                                                                                                                                                                                                   │ force-systemd-env-945733  │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ start   │ -p cert-options-533238 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ ssh     │ cert-options-533238 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ ssh     │ -p cert-options-533238 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ delete  │ -p cert-options-533238                                                                                                                                                                                                                        │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ start   │ -p old-k8s-version-135652 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:03 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-135652 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:03 UTC │                     │
	│ stop    │ -p old-k8s-version-135652 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:03 UTC │ 17 Oct 25 20:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-135652 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:04 UTC │ 17 Oct 25 20:04 UTC │
	│ start   │ -p old-k8s-version-135652 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:04 UTC │ 17 Oct 25 20:04 UTC │
	│ image   │ old-k8s-version-135652 image list --format=json                                                                                                                                                                                               │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ pause   │ -p old-k8s-version-135652 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:04:10
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:04:10.636994  453991 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:04:10.637120  453991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:04:10.637132  453991 out.go:374] Setting ErrFile to fd 2...
	I1017 20:04:10.637138  453991 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:04:10.637436  453991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:04:10.637809  453991 out.go:368] Setting JSON to false
	I1017 20:04:10.638769  453991 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10002,"bootTime":1760721449,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 20:04:10.638850  453991 start.go:141] virtualization:  
	I1017 20:04:10.642211  453991 out.go:179] * [old-k8s-version-135652] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:04:10.646029  453991 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:04:10.646075  453991 notify.go:220] Checking for updates...
	I1017 20:04:10.651971  453991 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:04:10.654813  453991 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:04:10.657722  453991 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 20:04:10.660661  453991 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:04:10.663579  453991 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:04:10.667097  453991 config.go:182] Loaded profile config "old-k8s-version-135652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 20:04:10.670539  453991 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1017 20:04:10.673272  453991 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:04:10.705914  453991 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:04:10.706037  453991 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:04:10.783140  453991 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:04:10.773124678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:04:10.783253  453991 docker.go:318] overlay module found
	I1017 20:04:10.786323  453991 out.go:179] * Using the docker driver based on existing profile
	I1017 20:04:10.789136  453991 start.go:305] selected driver: docker
	I1017 20:04:10.789158  453991 start.go:925] validating driver "docker" against &{Name:old-k8s-version-135652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-135652 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:04:10.789253  453991 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:04:10.790104  453991 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:04:10.854609  453991 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:04:10.843837424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:04:10.854947  453991 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:04:10.854981  453991 cni.go:84] Creating CNI manager for ""
	I1017 20:04:10.855038  453991 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:04:10.855079  453991 start.go:349] cluster config:
	{Name:old-k8s-version-135652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-135652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:04:10.858337  453991 out.go:179] * Starting "old-k8s-version-135652" primary control-plane node in "old-k8s-version-135652" cluster
	I1017 20:04:10.861186  453991 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:04:10.864079  453991 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:04:10.866884  453991 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 20:04:10.866937  453991 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1017 20:04:10.866950  453991 cache.go:58] Caching tarball of preloaded images
	I1017 20:04:10.866985  453991 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:04:10.867032  453991 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:04:10.867041  453991 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1017 20:04:10.867159  453991 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/config.json ...
	I1017 20:04:10.887494  453991 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:04:10.887521  453991 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:04:10.887540  453991 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:04:10.887563  453991 start.go:360] acquireMachinesLock for old-k8s-version-135652: {Name:mkb7e5198ce4bb901f93d40f8941ec8842fd8eb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:04:10.887639  453991 start.go:364] duration metric: took 50.271µs to acquireMachinesLock for "old-k8s-version-135652"
	I1017 20:04:10.887667  453991 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:04:10.887678  453991 fix.go:54] fixHost starting: 
	I1017 20:04:10.887974  453991 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Status}}
	I1017 20:04:10.905919  453991 fix.go:112] recreateIfNeeded on old-k8s-version-135652: state=Stopped err=<nil>
	W1017 20:04:10.905963  453991 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:04:10.909128  453991 out.go:252] * Restarting existing docker container for "old-k8s-version-135652" ...
	I1017 20:04:10.909234  453991 cli_runner.go:164] Run: docker start old-k8s-version-135652
	I1017 20:04:11.157916  453991 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Status}}
	I1017 20:04:11.181085  453991 kic.go:430] container "old-k8s-version-135652" state is running.
	I1017 20:04:11.181654  453991 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-135652
	I1017 20:04:11.206506  453991 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/config.json ...
	I1017 20:04:11.206723  453991 machine.go:93] provisionDockerMachine start ...
	I1017 20:04:11.206778  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:11.230227  453991 main.go:141] libmachine: Using SSH client type: native
	I1017 20:04:11.230545  453991 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I1017 20:04:11.230555  453991 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:04:11.231236  453991 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 20:04:14.384179  453991 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-135652
	
	I1017 20:04:14.384212  453991 ubuntu.go:182] provisioning hostname "old-k8s-version-135652"
	I1017 20:04:14.384273  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:14.402217  453991 main.go:141] libmachine: Using SSH client type: native
	I1017 20:04:14.402528  453991 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I1017 20:04:14.402545  453991 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-135652 && echo "old-k8s-version-135652" | sudo tee /etc/hostname
	I1017 20:04:14.563070  453991 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-135652
	
	I1017 20:04:14.563161  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:14.586320  453991 main.go:141] libmachine: Using SSH client type: native
	I1017 20:04:14.586622  453991 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I1017 20:04:14.586684  453991 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-135652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-135652/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-135652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:04:14.732688  453991 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:04:14.732716  453991 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 20:04:14.732743  453991 ubuntu.go:190] setting up certificates
	I1017 20:04:14.732752  453991 provision.go:84] configureAuth start
	I1017 20:04:14.732812  453991 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-135652
	I1017 20:04:14.750954  453991 provision.go:143] copyHostCerts
	I1017 20:04:14.751031  453991 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 20:04:14.751045  453991 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 20:04:14.751121  453991 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 20:04:14.751239  453991 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 20:04:14.751251  453991 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 20:04:14.751280  453991 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 20:04:14.751349  453991 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 20:04:14.751364  453991 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 20:04:14.751391  453991 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 20:04:14.751449  453991 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-135652 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-135652]
	I1017 20:04:14.905919  453991 provision.go:177] copyRemoteCerts
	I1017 20:04:14.905986  453991 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:04:14.906025  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:14.924682  453991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:04:15.030223  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:04:15.050137  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1017 20:04:15.067897  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 20:04:15.086004  453991 provision.go:87] duration metric: took 353.226887ms to configureAuth
	I1017 20:04:15.086030  453991 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:04:15.086241  453991 config.go:182] Loaded profile config "old-k8s-version-135652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 20:04:15.086344  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:15.105876  453991 main.go:141] libmachine: Using SSH client type: native
	I1017 20:04:15.106185  453991 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33414 <nil> <nil>}
	I1017 20:04:15.106208  453991 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:04:15.424718  453991 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:04:15.424745  453991 machine.go:96] duration metric: took 4.218012846s to provisionDockerMachine
	I1017 20:04:15.424755  453991 start.go:293] postStartSetup for "old-k8s-version-135652" (driver="docker")
	I1017 20:04:15.424766  453991 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:04:15.424830  453991 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:04:15.424872  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:15.447593  453991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:04:15.554286  453991 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:04:15.558104  453991 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:04:15.558133  453991 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:04:15.558145  453991 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 20:04:15.558207  453991 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 20:04:15.558290  453991 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 20:04:15.558399  453991 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:04:15.566014  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:04:15.583019  453991 start.go:296] duration metric: took 158.248872ms for postStartSetup
	I1017 20:04:15.583114  453991 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:04:15.583165  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:15.607890  453991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:04:15.713493  453991 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:04:15.718995  453991 fix.go:56] duration metric: took 4.831309373s for fixHost
	I1017 20:04:15.719018  453991 start.go:83] releasing machines lock for "old-k8s-version-135652", held for 4.831362936s
	I1017 20:04:15.719126  453991 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-135652
	I1017 20:04:15.745148  453991 ssh_runner.go:195] Run: cat /version.json
	I1017 20:04:15.745193  453991 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:04:15.745204  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:15.745247  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:15.768623  453991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:04:15.784940  453991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:04:15.966230  453991 ssh_runner.go:195] Run: systemctl --version
	I1017 20:04:15.973167  453991 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:04:16.016706  453991 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:04:16.022262  453991 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:04:16.022392  453991 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:04:16.031016  453991 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:04:16.031057  453991 start.go:495] detecting cgroup driver to use...
	I1017 20:04:16.031111  453991 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:04:16.031180  453991 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:04:16.048373  453991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:04:16.061756  453991 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:04:16.061837  453991 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:04:16.077983  453991 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:04:16.091236  453991 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:04:16.208012  453991 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:04:16.334299  453991 docker.go:234] disabling docker service ...
	I1017 20:04:16.334452  453991 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:04:16.351982  453991 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:04:16.367324  453991 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:04:16.489763  453991 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:04:16.608663  453991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:04:16.621751  453991 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:04:16.635287  453991 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1017 20:04:16.635393  453991 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:04:16.644134  453991 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:04:16.644247  453991 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:04:16.653553  453991 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:04:16.662212  453991 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:04:16.671029  453991 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:04:16.678871  453991 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:04:16.687717  453991 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:04:16.696160  453991 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:04:16.705720  453991 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:04:16.713157  453991 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:04:16.725678  453991 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:04:16.847051  453991 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:04:16.987167  453991 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:04:16.987256  453991 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:04:16.991824  453991 start.go:563] Will wait 60s for crictl version
	I1017 20:04:16.991888  453991 ssh_runner.go:195] Run: which crictl
	I1017 20:04:16.995579  453991 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:04:17.026330  453991 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:04:17.026437  453991 ssh_runner.go:195] Run: crio --version
	I1017 20:04:17.060174  453991 ssh_runner.go:195] Run: crio --version
	I1017 20:04:17.095279  453991 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1017 20:04:17.098085  453991 cli_runner.go:164] Run: docker network inspect old-k8s-version-135652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:04:17.115430  453991 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 20:04:17.119581  453991 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:04:17.128996  453991 kubeadm.go:883] updating cluster {Name:old-k8s-version-135652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-135652 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:04:17.129121  453991 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 20:04:17.129177  453991 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:04:17.165017  453991 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:04:17.165041  453991 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:04:17.165103  453991 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:04:17.191498  453991 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:04:17.191522  453991 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:04:17.191531  453991 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1017 20:04:17.191636  453991 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-135652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-135652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:04:17.191718  453991 ssh_runner.go:195] Run: crio config
	I1017 20:04:17.245706  453991 cni.go:84] Creating CNI manager for ""
	I1017 20:04:17.245732  453991 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:04:17.245750  453991 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:04:17.245780  453991 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-135652 NodeName:old-k8s-version-135652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:04:17.245928  453991 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-135652"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:04:17.246011  453991 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1017 20:04:17.253664  453991 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:04:17.253743  453991 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:04:17.261062  453991 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1017 20:04:17.274517  453991 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:04:17.288311  453991 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1017 20:04:17.301100  453991 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:04:17.304419  453991 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:04:17.313847  453991 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:04:17.432193  453991 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:04:17.448181  453991 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652 for IP: 192.168.76.2
	I1017 20:04:17.448251  453991 certs.go:195] generating shared ca certs ...
	I1017 20:04:17.448282  453991 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:04:17.448453  453991 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 20:04:17.448561  453991 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 20:04:17.448592  453991 certs.go:257] generating profile certs ...
	I1017 20:04:17.448729  453991 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/client.key
	I1017 20:04:17.448839  453991 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/apiserver.key.7915436e
	I1017 20:04:17.448913  453991 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/proxy-client.key
	I1017 20:04:17.449066  453991 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 20:04:17.449136  453991 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 20:04:17.449163  453991 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:04:17.449218  453991 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:04:17.449271  453991 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:04:17.449326  453991 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 20:04:17.449399  453991 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:04:17.450112  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:04:17.471125  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:04:17.492047  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:04:17.514162  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 20:04:17.534399  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1017 20:04:17.555888  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:04:17.578451  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:04:17.602501  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:04:17.630430  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 20:04:17.663052  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 20:04:17.683189  453991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:04:17.705550  453991 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:04:17.727322  453991 ssh_runner.go:195] Run: openssl version
	I1017 20:04:17.734509  453991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 20:04:17.744037  453991 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 20:04:17.747840  453991 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 20:04:17.747934  453991 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 20:04:17.791729  453991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:04:17.799977  453991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:04:17.808095  453991 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:04:17.811934  453991 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:04:17.812001  453991 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:04:17.853167  453991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:04:17.861180  453991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 20:04:17.869543  453991 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 20:04:17.873536  453991 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 20:04:17.873635  453991 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 20:04:17.915406  453991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 20:04:17.923478  453991 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:04:17.927393  453991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:04:17.968378  453991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:04:18.009712  453991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:04:18.051890  453991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:04:18.104780  453991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:04:18.154162  453991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 20:04:18.229716  453991 kubeadm.go:400] StartCluster: {Name:old-k8s-version-135652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-135652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:04:18.229831  453991 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:04:18.229916  453991 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:04:18.309058  453991 cri.go:89] found id: "bbdce86113a44ab36a088aa850f2a5cddb392bb495337b9a38ddedc57c767b53"
	I1017 20:04:18.309094  453991 cri.go:89] found id: "04fec30cc87f2919128db984312df9b8cd7bdc614707218a4d5892931a729287"
	I1017 20:04:18.309100  453991 cri.go:89] found id: "69a0b4952c8c39c59af5f7438198d6b6fe4e7cb0a49809e1f434fa02cf6b54db"
	I1017 20:04:18.309113  453991 cri.go:89] found id: "72b4880a4ac31a561331fc9731e3f6e0e2d06b3829e4c1ee82b157b2fe66d636"
	I1017 20:04:18.309116  453991 cri.go:89] found id: ""
	I1017 20:04:18.309171  453991 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 20:04:18.329855  453991 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:04:18Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:04:18.329960  453991 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:04:18.342430  453991 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:04:18.342466  453991 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:04:18.342517  453991 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:04:18.353813  453991 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:04:18.354481  453991 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-135652" does not appear in /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:04:18.354834  453991 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-257739/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-135652" cluster setting kubeconfig missing "old-k8s-version-135652" context setting]
	I1017 20:04:18.355386  453991 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:04:18.357225  453991 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:04:18.367738  453991 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1017 20:04:18.367782  453991 kubeadm.go:601] duration metric: took 25.308093ms to restartPrimaryControlPlane
	I1017 20:04:18.367793  453991 kubeadm.go:402] duration metric: took 138.088617ms to StartCluster
	I1017 20:04:18.367811  453991 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:04:18.367881  453991 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:04:18.368879  453991 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:04:18.369116  453991 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:04:18.369505  453991 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:04:18.369583  453991 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-135652"
	I1017 20:04:18.369601  453991 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-135652"
	W1017 20:04:18.369608  453991 addons.go:247] addon storage-provisioner should already be in state true
	I1017 20:04:18.369629  453991 host.go:66] Checking if "old-k8s-version-135652" exists ...
	I1017 20:04:18.370051  453991 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Status}}
	I1017 20:04:18.370744  453991 config.go:182] Loaded profile config "old-k8s-version-135652": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 20:04:18.370843  453991 addons.go:69] Setting dashboard=true in profile "old-k8s-version-135652"
	I1017 20:04:18.370876  453991 addons.go:238] Setting addon dashboard=true in "old-k8s-version-135652"
	W1017 20:04:18.370907  453991 addons.go:247] addon dashboard should already be in state true
	I1017 20:04:18.370950  453991 host.go:66] Checking if "old-k8s-version-135652" exists ...
	I1017 20:04:18.371447  453991 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Status}}
	I1017 20:04:18.372172  453991 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-135652"
	I1017 20:04:18.372202  453991 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-135652"
	I1017 20:04:18.372508  453991 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Status}}
	I1017 20:04:18.376550  453991 out.go:179] * Verifying Kubernetes components...
	I1017 20:04:18.388585  453991 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:04:18.421705  453991 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-135652"
	W1017 20:04:18.421736  453991 addons.go:247] addon default-storageclass should already be in state true
	I1017 20:04:18.421761  453991 host.go:66] Checking if "old-k8s-version-135652" exists ...
	I1017 20:04:18.422164  453991 cli_runner.go:164] Run: docker container inspect old-k8s-version-135652 --format={{.State.Status}}
	I1017 20:04:18.448214  453991 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:04:18.448285  453991 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 20:04:18.450620  453991 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:04:18.450643  453991 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:04:18.450705  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:18.452948  453991 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:04:18.452971  453991 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:04:18.453027  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:18.456498  453991 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1017 20:04:18.459468  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 20:04:18.459494  453991 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 20:04:18.459562  453991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135652
	I1017 20:04:18.516471  453991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:04:18.519028  453991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:04:18.522030  453991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33414 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/old-k8s-version-135652/id_rsa Username:docker}
	I1017 20:04:18.698013  453991 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:04:18.709629  453991 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:04:18.769236  453991 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:04:18.797569  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 20:04:18.797643  453991 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 20:04:18.859589  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 20:04:18.859682  453991 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 20:04:18.935144  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 20:04:18.935215  453991 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 20:04:19.016667  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 20:04:19.016727  453991 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 20:04:19.042590  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 20:04:19.042665  453991 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 20:04:19.069701  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 20:04:19.069772  453991 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 20:04:19.087871  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 20:04:19.087940  453991 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 20:04:19.111707  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 20:04:19.111780  453991 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 20:04:19.132354  453991 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 20:04:19.132426  453991 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 20:04:19.156249  453991 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 20:04:24.362207  453991 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.664145728s)
	I1017 20:04:24.362264  453991 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.6525699s)
	I1017 20:04:24.362296  453991 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-135652" to be "Ready" ...
	I1017 20:04:24.362587  453991 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.593289945s)
	I1017 20:04:24.396812  453991 node_ready.go:49] node "old-k8s-version-135652" is "Ready"
	I1017 20:04:24.396886  453991 node_ready.go:38] duration metric: took 34.569499ms for node "old-k8s-version-135652" to be "Ready" ...
	I1017 20:04:24.396925  453991 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:04:24.397007  453991 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:04:24.914555  453991 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.758216788s)
	I1017 20:04:24.914797  453991 api_server.go:72] duration metric: took 6.545648036s to wait for apiserver process to appear ...
	I1017 20:04:24.914830  453991 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:04:24.914855  453991 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:04:24.917851  453991 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-135652 addons enable metrics-server
	
	I1017 20:04:24.920855  453991 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1017 20:04:24.923764  453991 addons.go:514] duration metric: took 6.554246895s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1017 20:04:24.928499  453991 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1017 20:04:24.930152  453991 api_server.go:141] control plane version: v1.28.0
	I1017 20:04:24.930226  453991 api_server.go:131] duration metric: took 15.381466ms to wait for apiserver health ...
	I1017 20:04:24.930261  453991 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:04:24.938672  453991 system_pods.go:59] 8 kube-system pods found
	I1017 20:04:24.938710  453991 system_pods.go:61] "coredns-5dd5756b68-74pn6" [a9d889b2-d91c-493f-a0a8-de610e7240d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:04:24.938723  453991 system_pods.go:61] "etcd-old-k8s-version-135652" [985d2d7b-3099-455a-9396-243cdd940ebf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:04:24.938729  453991 system_pods.go:61] "kindnet-spvzd" [50b2e826-62cc-4853-974d-13b9ab81b802] Running
	I1017 20:04:24.938736  453991 system_pods.go:61] "kube-apiserver-old-k8s-version-135652" [9e376f4f-93e6-4ce5-ab1e-051909c3d815] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:04:24.938743  453991 system_pods.go:61] "kube-controller-manager-old-k8s-version-135652" [a0affdd9-608a-4028-b1c7-d6a2773d33f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:04:24.938748  453991 system_pods.go:61] "kube-proxy-5qhvs" [ca7a19b2-9842-4190-85f5-9eb4e0985eea] Running
	I1017 20:04:24.938756  453991 system_pods.go:61] "kube-scheduler-old-k8s-version-135652" [a19340fe-f4de-443e-b749-f461c5fd13bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:04:24.938760  453991 system_pods.go:61] "storage-provisioner" [af094a04-92d3-44b6-b662-542feecaac6e] Running
	I1017 20:04:24.938767  453991 system_pods.go:74] duration metric: took 8.487069ms to wait for pod list to return data ...
	I1017 20:04:24.938774  453991 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:04:24.941677  453991 default_sa.go:45] found service account: "default"
	I1017 20:04:24.941696  453991 default_sa.go:55] duration metric: took 2.916236ms for default service account to be created ...
	I1017 20:04:24.941704  453991 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:04:24.945547  453991 system_pods.go:86] 8 kube-system pods found
	I1017 20:04:24.945632  453991 system_pods.go:89] "coredns-5dd5756b68-74pn6" [a9d889b2-d91c-493f-a0a8-de610e7240d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:04:24.945658  453991 system_pods.go:89] "etcd-old-k8s-version-135652" [985d2d7b-3099-455a-9396-243cdd940ebf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:04:24.945680  453991 system_pods.go:89] "kindnet-spvzd" [50b2e826-62cc-4853-974d-13b9ab81b802] Running
	I1017 20:04:24.945722  453991 system_pods.go:89] "kube-apiserver-old-k8s-version-135652" [9e376f4f-93e6-4ce5-ab1e-051909c3d815] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:04:24.945750  453991 system_pods.go:89] "kube-controller-manager-old-k8s-version-135652" [a0affdd9-608a-4028-b1c7-d6a2773d33f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:04:24.945775  453991 system_pods.go:89] "kube-proxy-5qhvs" [ca7a19b2-9842-4190-85f5-9eb4e0985eea] Running
	I1017 20:04:24.945815  453991 system_pods.go:89] "kube-scheduler-old-k8s-version-135652" [a19340fe-f4de-443e-b749-f461c5fd13bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:04:24.945837  453991 system_pods.go:89] "storage-provisioner" [af094a04-92d3-44b6-b662-542feecaac6e] Running
	I1017 20:04:24.945861  453991 system_pods.go:126] duration metric: took 4.151483ms to wait for k8s-apps to be running ...
	I1017 20:04:24.945899  453991 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:04:24.945982  453991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:04:24.961396  453991 system_svc.go:56] duration metric: took 15.487802ms WaitForService to wait for kubelet
	I1017 20:04:24.961475  453991 kubeadm.go:586] duration metric: took 6.592325688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:04:24.961511  453991 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:04:24.965991  453991 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:04:24.966073  453991 node_conditions.go:123] node cpu capacity is 2
	I1017 20:04:24.966102  453991 node_conditions.go:105] duration metric: took 4.569588ms to run NodePressure ...
	I1017 20:04:24.966129  453991 start.go:241] waiting for startup goroutines ...
	I1017 20:04:24.966159  453991 start.go:246] waiting for cluster config update ...
	I1017 20:04:24.966199  453991 start.go:255] writing updated cluster config ...
	I1017 20:04:24.966535  453991 ssh_runner.go:195] Run: rm -f paused
	I1017 20:04:24.970549  453991 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:04:24.975578  453991 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-74pn6" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 20:04:26.981431  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:29.480812  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:31.481873  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:33.482153  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:35.981513  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:37.982652  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:39.983060  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:42.481181  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:44.482457  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:46.484535  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:48.982253  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:51.487437  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:53.985510  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	W1017 20:04:55.986377  453991 pod_ready.go:104] pod "coredns-5dd5756b68-74pn6" is not "Ready", error: <nil>
	I1017 20:04:56.982355  453991 pod_ready.go:94] pod "coredns-5dd5756b68-74pn6" is "Ready"
	I1017 20:04:56.982379  453991 pod_ready.go:86] duration metric: took 32.006722316s for pod "coredns-5dd5756b68-74pn6" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:56.985449  453991 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:56.991164  453991 pod_ready.go:94] pod "etcd-old-k8s-version-135652" is "Ready"
	I1017 20:04:56.991196  453991 pod_ready.go:86] duration metric: took 5.721639ms for pod "etcd-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:56.994387  453991 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:56.999199  453991 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-135652" is "Ready"
	I1017 20:04:56.999226  453991 pod_ready.go:86] duration metric: took 4.810567ms for pod "kube-apiserver-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:57.003987  453991 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:57.180473  453991 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-135652" is "Ready"
	I1017 20:04:57.180502  453991 pod_ready.go:86] duration metric: took 176.484933ms for pod "kube-controller-manager-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:57.380476  453991 pod_ready.go:83] waiting for pod "kube-proxy-5qhvs" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:57.780217  453991 pod_ready.go:94] pod "kube-proxy-5qhvs" is "Ready"
	I1017 20:04:57.780248  453991 pod_ready.go:86] duration metric: took 399.739267ms for pod "kube-proxy-5qhvs" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:57.981117  453991 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:58.379766  453991 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-135652" is "Ready"
	I1017 20:04:58.379795  453991 pod_ready.go:86] duration metric: took 398.650091ms for pod "kube-scheduler-old-k8s-version-135652" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:04:58.379807  453991 pod_ready.go:40] duration metric: took 33.409175647s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:04:58.434433  453991 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1017 20:04:58.437748  453991 out.go:203] 
	W1017 20:04:58.440749  453991 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1017 20:04:58.443770  453991 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1017 20:04:58.446769  453991 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-135652" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.650562654Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d3eddd87-ba93-4bc8-b4d2-29fbd31e97be name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.651913977Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=58a50707-2a1e-4c09-a505-4c51a9d83253 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.652965468Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr/dashboard-metrics-scraper" id=cb38142f-de3c-4103-87c8-9852451302fe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.653195927Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.663264335Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.66421182Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.6852713Z" level=info msg="Created container 3f559ef86315b771de0dfbdb515dd71e17f13eafb0a40f4dc619305b0767aeff: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr/dashboard-metrics-scraper" id=cb38142f-de3c-4103-87c8-9852451302fe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.686135416Z" level=info msg="Starting container: 3f559ef86315b771de0dfbdb515dd71e17f13eafb0a40f4dc619305b0767aeff" id=2f208142-1e36-46a0-a99c-706d3c679160 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.687869826Z" level=info msg="Started container" PID=1632 containerID=3f559ef86315b771de0dfbdb515dd71e17f13eafb0a40f4dc619305b0767aeff description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr/dashboard-metrics-scraper id=2f208142-1e36-46a0-a99c-706d3c679160 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1f989dc473ae32fbd8c5406c4fee9929ba69b41ff68c1efa7a93134a7f74e186
	Oct 17 20:04:57 old-k8s-version-135652 conmon[1630]: conmon 3f559ef86315b771de0d <ninfo>: container 1632 exited with status 1
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.836285052Z" level=info msg="Removing container: 53d961ddbdae2cbd1275c9dd7fc9f4c54be03f325546eb8ec9baa9c22fcd0cda" id=06cc7e46-7c5f-4af6-8ef3-025d743de5de name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.844741302Z" level=info msg="Error loading conmon cgroup of container 53d961ddbdae2cbd1275c9dd7fc9f4c54be03f325546eb8ec9baa9c22fcd0cda: cgroup deleted" id=06cc7e46-7c5f-4af6-8ef3-025d743de5de name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:04:57 old-k8s-version-135652 crio[648]: time="2025-10-17T20:04:57.847762872Z" level=info msg="Removed container 53d961ddbdae2cbd1275c9dd7fc9f4c54be03f325546eb8ec9baa9c22fcd0cda: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr/dashboard-metrics-scraper" id=06cc7e46-7c5f-4af6-8ef3-025d743de5de name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.467691615Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.472144095Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.472192438Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.472234513Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.47563012Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.475664236Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.475685552Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.479324057Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.479360306Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.479383042Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.482470374Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:05:04 old-k8s-version-135652 crio[648]: time="2025-10-17T20:05:04.482499247Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	3f559ef86315b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   1f989dc473ae3       dashboard-metrics-scraper-5f989dc9cf-f5dzr       kubernetes-dashboard
	e2905732bd31a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   146ec4e8af493       storage-provisioner                              kube-system
	05a6be346e43f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   28 seconds ago      Running             kubernetes-dashboard        0                   56e6e1fa3d201       kubernetes-dashboard-8694d4445c-xwfgw            kubernetes-dashboard
	3cee913666df0       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           51 seconds ago      Running             coredns                     1                   71499930ff666       coredns-5dd5756b68-74pn6                         kube-system
	61b71350a49b4       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   439f31896c045       busybox                                          default
	fa973d114ac94       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   146ec4e8af493       storage-provisioner                              kube-system
	2530bf6fb2cb6       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           51 seconds ago      Running             kube-proxy                  1                   bab20b7b0337c       kube-proxy-5qhvs                                 kube-system
	4e2070657cd73       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   575c72f67fe82       kindnet-spvzd                                    kube-system
	bbdce86113a44       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           57 seconds ago      Running             kube-apiserver              1                   90fba20466d45       kube-apiserver-old-k8s-version-135652            kube-system
	04fec30cc87f2       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           57 seconds ago      Running             etcd                        1                   d615897286357       etcd-old-k8s-version-135652                      kube-system
	69a0b4952c8c3       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           57 seconds ago      Running             kube-scheduler              1                   c8268f48ce56e       kube-scheduler-old-k8s-version-135652            kube-system
	72b4880a4ac31       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           57 seconds ago      Running             kube-controller-manager     1                   3d5848268ef45       kube-controller-manager-old-k8s-version-135652   kube-system
	
	
	==> coredns [3cee913666df08b4596783394b5ea5ef68e091d315d89e582c2c7c642e59ea67] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46846 - 56541 "HINFO IN 8234386990561194953.2354623737495795273. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012352521s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-135652
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-135652
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=old-k8s-version-135652
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_03_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:03:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-135652
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:05:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:04:53 +0000   Fri, 17 Oct 2025 20:03:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:04:53 +0000   Fri, 17 Oct 2025 20:03:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:04:53 +0000   Fri, 17 Oct 2025 20:03:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:04:53 +0000   Fri, 17 Oct 2025 20:03:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-135652
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                9cdb3944-7199-44fe-af06-5219f78e8dc9
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-5dd5756b68-74pn6                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     104s
	  kube-system                 etcd-old-k8s-version-135652                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-spvzd                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-135652             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-135652    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-5qhvs                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-135652             100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-f5dzr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-xwfgw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  117s               kubelet          Node old-k8s-version-135652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s               kubelet          Node old-k8s-version-135652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s               kubelet          Node old-k8s-version-135652 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s               node-controller  Node old-k8s-version-135652 event: Registered Node old-k8s-version-135652 in Controller
	  Normal  NodeReady                91s                kubelet          Node old-k8s-version-135652 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node old-k8s-version-135652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node old-k8s-version-135652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node old-k8s-version-135652 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                node-controller  Node old-k8s-version-135652 event: Registered Node old-k8s-version-135652 in Controller
	
	
	==> dmesg <==
	[Oct17 19:36] overlayfs: idmapped layers are currently not supported
	[Oct17 19:41] overlayfs: idmapped layers are currently not supported
	[ +34.896999] overlayfs: idmapped layers are currently not supported
	[Oct17 19:42] overlayfs: idmapped layers are currently not supported
	[Oct17 19:43] overlayfs: idmapped layers are currently not supported
	[Oct17 19:45] overlayfs: idmapped layers are currently not supported
	[Oct17 19:46] overlayfs: idmapped layers are currently not supported
	[ +18.070710] overlayfs: idmapped layers are currently not supported
	[Oct17 19:47] overlayfs: idmapped layers are currently not supported
	[ +43.697346] overlayfs: idmapped layers are currently not supported
	[Oct17 19:48] overlayfs: idmapped layers are currently not supported
	[Oct17 19:49] overlayfs: idmapped layers are currently not supported
	[ +26.194162] overlayfs: idmapped layers are currently not supported
	[Oct17 19:50] overlayfs: idmapped layers are currently not supported
	[Oct17 19:52] overlayfs: idmapped layers are currently not supported
	[Oct17 19:54] overlayfs: idmapped layers are currently not supported
	[Oct17 19:55] overlayfs: idmapped layers are currently not supported
	[Oct17 19:56] overlayfs: idmapped layers are currently not supported
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:01] overlayfs: idmapped layers are currently not supported
	[ +29.873287] overlayfs: idmapped layers are currently not supported
	[Oct17 20:02] overlayfs: idmapped layers are currently not supported
	[ +29.827785] overlayfs: idmapped layers are currently not supported
	[Oct17 20:03] overlayfs: idmapped layers are currently not supported
	[Oct17 20:04] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [04fec30cc87f2919128db984312df9b8cd7bdc614707218a4d5892931a729287] <==
	{"level":"info","ts":"2025-10-17T20:04:18.492037Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-10-17T20:04:18.496723Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-17T20:04:18.4973Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-17T20:04:18.496886Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T20:04:18.497393Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T20:04:18.497434Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T20:04:18.497523Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-17T20:04:18.497188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-17T20:04:18.497668Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-17T20:04:18.497785Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T20:04:18.497838Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T20:04:19.652572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-17T20:04:19.652678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-17T20:04:19.652718Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-17T20:04:19.652755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-17T20:04:19.652789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-17T20:04:19.652826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-17T20:04:19.652859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-17T20:04:19.654118Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-135652 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-17T20:04:19.654306Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T20:04:19.655254Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-17T20:04:19.664373Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T20:04:19.665469Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-17T20:04:19.665948Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-17T20:04:19.665998Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:05:15 up  2:47,  0 user,  load average: 1.68, 2.79, 2.53
	Linux old-k8s-version-135652 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4e2070657cd73d4d62f63f2797cbc953d5b2ae8ddd88015521bd823860afa9a3] <==
	I1017 20:04:24.306377       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:04:24.306550       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 20:04:24.306665       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:04:24.306676       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:04:24.306689       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:04:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:04:24.502018       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:04:24.502042       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:04:24.502051       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:04:24.502551       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 20:04:54.461960       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 20:04:54.502567       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 20:04:54.502760       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 20:04:54.502913       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1017 20:04:55.902184       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:04:55.902211       1 metrics.go:72] Registering metrics
	I1017 20:04:55.902278       1 controller.go:711] "Syncing nftables rules"
	I1017 20:05:04.467372       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:05:04.467427       1 main.go:301] handling current node
	I1017 20:05:14.466238       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:05:14.466332       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bbdce86113a44ab36a088aa850f2a5cddb392bb495337b9a38ddedc57c767b53] <==
	I1017 20:04:22.928779       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:04:22.957285       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1017 20:04:22.978352       1 shared_informer.go:318] Caches are synced for configmaps
	I1017 20:04:22.978468       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1017 20:04:22.980327       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1017 20:04:22.980422       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:04:22.986502       1 aggregator.go:166] initial CRD sync complete...
	I1017 20:04:22.986625       1 autoregister_controller.go:141] Starting autoregister controller
	I1017 20:04:22.986657       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:04:22.986689       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:04:22.987931       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1017 20:04:23.006097       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1017 20:04:23.006233       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1017 20:04:23.013471       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:04:23.603301       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:04:24.654704       1 controller.go:624] quota admission added evaluator for: namespaces
	I1017 20:04:24.745714       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1017 20:04:24.783563       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:04:24.807082       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:04:24.819850       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1017 20:04:24.878006       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.67.242"}
	I1017 20:04:24.906170       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.166.185"}
	I1017 20:04:35.954555       1 controller.go:624] quota admission added evaluator for: endpoints
	I1017 20:04:36.255670       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1017 20:04:36.469113       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [72b4880a4ac31a561331fc9731e3f6e0e2d06b3829e4c1ee82b157b2fe66d636] <==
	I1017 20:04:36.317126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="370.451719ms"
	I1017 20:04:36.317604       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="122.556µs"
	I1017 20:04:36.323197       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-xwfgw"
	I1017 20:04:36.323332       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-f5dzr"
	I1017 20:04:36.344215       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="80.938048ms"
	I1017 20:04:36.350968       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="90.324628ms"
	I1017 20:04:36.357375       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.112475ms"
	I1017 20:04:36.357973       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="69.339µs"
	I1017 20:04:36.387037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="35.960624ms"
	I1017 20:04:36.387233       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.847µs"
	I1017 20:04:36.399435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="62.882µs"
	I1017 20:04:36.480073       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1017 20:04:36.480251       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1017 20:04:36.503840       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 20:04:36.544136       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 20:04:36.544167       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1017 20:04:41.791816       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="176.758µs"
	I1017 20:04:42.796718       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.683µs"
	I1017 20:04:46.661540       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="89.023µs"
	I1017 20:04:47.820600       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.947327ms"
	I1017 20:04:47.821850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="36.906µs"
	I1017 20:04:56.683900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.195197ms"
	I1017 20:04:56.684004       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.714µs"
	I1017 20:04:57.842116       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.603µs"
	I1017 20:05:06.647587       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="48.967µs"
	
	
	==> kube-proxy [2530bf6fb2cb6db3309ea4398f8a1439523777523161e863d0aff28c3cfb7f45] <==
	I1017 20:04:24.483647       1 server_others.go:69] "Using iptables proxy"
	I1017 20:04:24.498529       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1017 20:04:24.636456       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:04:24.648650       1 server_others.go:152] "Using iptables Proxier"
	I1017 20:04:24.648685       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1017 20:04:24.648696       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1017 20:04:24.648718       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1017 20:04:24.648916       1 server.go:846] "Version info" version="v1.28.0"
	I1017 20:04:24.648926       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:04:24.650110       1 config.go:188] "Starting service config controller"
	I1017 20:04:24.650120       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1017 20:04:24.650136       1 config.go:97] "Starting endpoint slice config controller"
	I1017 20:04:24.650139       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1017 20:04:24.657636       1 config.go:315] "Starting node config controller"
	I1017 20:04:24.664804       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1017 20:04:24.751098       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1017 20:04:24.751229       1 shared_informer.go:318] Caches are synced for service config
	I1017 20:04:24.765197       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [69a0b4952c8c39c59af5f7438198d6b6fe4e7cb0a49809e1f434fa02cf6b54db] <==
	I1017 20:04:22.909448       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1017 20:04:22.912619       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1017 20:04:22.916757       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1017 20:04:22.916876       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1017 20:04:22.927732       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1017 20:04:22.927775       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 20:04:22.931903       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1017 20:04:22.931977       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1017 20:04:22.932060       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1017 20:04:22.932080       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1017 20:04:22.932165       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1017 20:04:22.932181       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1017 20:04:22.932244       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1017 20:04:22.932259       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1017 20:04:22.932314       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1017 20:04:22.932328       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1017 20:04:22.932397       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1017 20:04:22.932411       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1017 20:04:22.932466       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1017 20:04:22.932481       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1017 20:04:22.932546       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1017 20:04:22.932562       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1017 20:04:22.933094       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1017 20:04:22.933173       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1017 20:04:24.209866       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 17 20:04:36 old-k8s-version-135652 kubelet[768]: I1017 20:04:36.340942     768 topology_manager.go:215] "Topology Admit Handler" podUID="cc2416cb-c5d5-48c6-870f-828b378c0b23" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-xwfgw"
	Oct 17 20:04:36 old-k8s-version-135652 kubelet[768]: I1017 20:04:36.403816     768 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnckj\" (UniqueName: \"kubernetes.io/projected/ba9a7e68-9a98-4197-b4d2-4e7e495d58ae-kube-api-access-rnckj\") pod \"dashboard-metrics-scraper-5f989dc9cf-f5dzr\" (UID: \"ba9a7e68-9a98-4197-b4d2-4e7e495d58ae\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr"
	Oct 17 20:04:36 old-k8s-version-135652 kubelet[768]: I1017 20:04:36.404094     768 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlw65\" (UniqueName: \"kubernetes.io/projected/cc2416cb-c5d5-48c6-870f-828b378c0b23-kube-api-access-qlw65\") pod \"kubernetes-dashboard-8694d4445c-xwfgw\" (UID: \"cc2416cb-c5d5-48c6-870f-828b378c0b23\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xwfgw"
	Oct 17 20:04:36 old-k8s-version-135652 kubelet[768]: I1017 20:04:36.404220     768 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ba9a7e68-9a98-4197-b4d2-4e7e495d58ae-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-f5dzr\" (UID: \"ba9a7e68-9a98-4197-b4d2-4e7e495d58ae\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr"
	Oct 17 20:04:36 old-k8s-version-135652 kubelet[768]: I1017 20:04:36.404343     768 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cc2416cb-c5d5-48c6-870f-828b378c0b23-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-xwfgw\" (UID: \"cc2416cb-c5d5-48c6-870f-828b378c0b23\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xwfgw"
	Oct 17 20:04:36 old-k8s-version-135652 kubelet[768]: W1017 20:04:36.665197     768 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86/crio-1f989dc473ae32fbd8c5406c4fee9929ba69b41ff68c1efa7a93134a7f74e186 WatchSource:0}: Error finding container 1f989dc473ae32fbd8c5406c4fee9929ba69b41ff68c1efa7a93134a7f74e186: Status 404 returned error can't find the container with id 1f989dc473ae32fbd8c5406c4fee9929ba69b41ff68c1efa7a93134a7f74e186
	Oct 17 20:04:36 old-k8s-version-135652 kubelet[768]: W1017 20:04:36.681406     768 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/b175bb475b3fc10c26a00362a2c7ab6c9f25d2c0ff71db333b2dde6548bc4f86/crio-56e6e1fa3d20198e21fff890cad23fef0bf8f1daaa5f26f7af72b2dc8bb01084 WatchSource:0}: Error finding container 56e6e1fa3d20198e21fff890cad23fef0bf8f1daaa5f26f7af72b2dc8bb01084: Status 404 returned error can't find the container with id 56e6e1fa3d20198e21fff890cad23fef0bf8f1daaa5f26f7af72b2dc8bb01084
	Oct 17 20:04:41 old-k8s-version-135652 kubelet[768]: I1017 20:04:41.775862     768 scope.go:117] "RemoveContainer" containerID="fc234a63e6b0e6253e1670b8dad649256d1ad3150fa75fa8870814b817c88aee"
	Oct 17 20:04:42 old-k8s-version-135652 kubelet[768]: I1017 20:04:42.780508     768 scope.go:117] "RemoveContainer" containerID="fc234a63e6b0e6253e1670b8dad649256d1ad3150fa75fa8870814b817c88aee"
	Oct 17 20:04:42 old-k8s-version-135652 kubelet[768]: I1017 20:04:42.780783     768 scope.go:117] "RemoveContainer" containerID="53d961ddbdae2cbd1275c9dd7fc9f4c54be03f325546eb8ec9baa9c22fcd0cda"
	Oct 17 20:04:42 old-k8s-version-135652 kubelet[768]: E1017 20:04:42.781065     768 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f5dzr_kubernetes-dashboard(ba9a7e68-9a98-4197-b4d2-4e7e495d58ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr" podUID="ba9a7e68-9a98-4197-b4d2-4e7e495d58ae"
	Oct 17 20:04:46 old-k8s-version-135652 kubelet[768]: I1017 20:04:46.633940     768 scope.go:117] "RemoveContainer" containerID="53d961ddbdae2cbd1275c9dd7fc9f4c54be03f325546eb8ec9baa9c22fcd0cda"
	Oct 17 20:04:46 old-k8s-version-135652 kubelet[768]: E1017 20:04:46.634272     768 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f5dzr_kubernetes-dashboard(ba9a7e68-9a98-4197-b4d2-4e7e495d58ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr" podUID="ba9a7e68-9a98-4197-b4d2-4e7e495d58ae"
	Oct 17 20:04:47 old-k8s-version-135652 kubelet[768]: I1017 20:04:47.809638     768 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xwfgw" podStartSLOduration=1.4919803489999999 podCreationTimestamp="2025-10-17 20:04:36 +0000 UTC" firstStartedPulling="2025-10-17 20:04:36.685576652 +0000 UTC m=+19.232972064" lastFinishedPulling="2025-10-17 20:04:47.003168686 +0000 UTC m=+29.550564106" observedRunningTime="2025-10-17 20:04:47.80952614 +0000 UTC m=+30.356921552" watchObservedRunningTime="2025-10-17 20:04:47.809572391 +0000 UTC m=+30.356967795"
	Oct 17 20:04:54 old-k8s-version-135652 kubelet[768]: I1017 20:04:54.814104     768 scope.go:117] "RemoveContainer" containerID="fa973d114ac945d1a893e6ca7e8c2be9fdc00ee2b43156c1e95432093ff9c4d7"
	Oct 17 20:04:57 old-k8s-version-135652 kubelet[768]: I1017 20:04:57.649557     768 scope.go:117] "RemoveContainer" containerID="53d961ddbdae2cbd1275c9dd7fc9f4c54be03f325546eb8ec9baa9c22fcd0cda"
	Oct 17 20:04:57 old-k8s-version-135652 kubelet[768]: I1017 20:04:57.824996     768 scope.go:117] "RemoveContainer" containerID="53d961ddbdae2cbd1275c9dd7fc9f4c54be03f325546eb8ec9baa9c22fcd0cda"
	Oct 17 20:04:57 old-k8s-version-135652 kubelet[768]: I1017 20:04:57.825269     768 scope.go:117] "RemoveContainer" containerID="3f559ef86315b771de0dfbdb515dd71e17f13eafb0a40f4dc619305b0767aeff"
	Oct 17 20:04:57 old-k8s-version-135652 kubelet[768]: E1017 20:04:57.825531     768 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f5dzr_kubernetes-dashboard(ba9a7e68-9a98-4197-b4d2-4e7e495d58ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr" podUID="ba9a7e68-9a98-4197-b4d2-4e7e495d58ae"
	Oct 17 20:05:06 old-k8s-version-135652 kubelet[768]: I1017 20:05:06.634204     768 scope.go:117] "RemoveContainer" containerID="3f559ef86315b771de0dfbdb515dd71e17f13eafb0a40f4dc619305b0767aeff"
	Oct 17 20:05:06 old-k8s-version-135652 kubelet[768]: E1017 20:05:06.634505     768 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f5dzr_kubernetes-dashboard(ba9a7e68-9a98-4197-b4d2-4e7e495d58ae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f5dzr" podUID="ba9a7e68-9a98-4197-b4d2-4e7e495d58ae"
	Oct 17 20:05:10 old-k8s-version-135652 kubelet[768]: I1017 20:05:10.747902     768 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 17 20:05:10 old-k8s-version-135652 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:05:10 old-k8s-version-135652 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:05:10 old-k8s-version-135652 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [05a6be346e43f4a055341332d969f75e6f690027eef3065dcf733b4b45ebb9bf] <==
	2025/10/17 20:04:47 Using namespace: kubernetes-dashboard
	2025/10/17 20:04:47 Using in-cluster config to connect to apiserver
	2025/10/17 20:04:47 Using secret token for csrf signing
	2025/10/17 20:04:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 20:04:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 20:04:47 Successful initial request to the apiserver, version: v1.28.0
	2025/10/17 20:04:47 Generating JWE encryption key
	2025/10/17 20:04:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 20:04:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 20:04:47 Initializing JWE encryption key from synchronized object
	2025/10/17 20:04:47 Creating in-cluster Sidecar client
	2025/10/17 20:04:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 20:04:47 Serving insecurely on HTTP port: 9090
	2025/10/17 20:04:47 Starting overwatch
	
	
	==> storage-provisioner [e2905732bd31a768f3a5cbf8925e8ba87524f0e93f091c5ef5c4eff9b2bbfea1] <==
	I1017 20:04:54.866333       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:04:54.878633       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:04:54.878773       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1017 20:05:12.282657       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:05:12.284630       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-135652_6a5a4fc4-f618-4440-b940-9b44c7d2b495!
	I1017 20:05:12.290321       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3ebb79cd-89e4-4fbf-baf1-fb4d250e17dc", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-135652_6a5a4fc4-f618-4440-b940-9b44c7d2b495 became leader
	I1017 20:05:12.385460       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-135652_6a5a4fc4-f618-4440-b940-9b44c7d2b495!
	
	
	==> storage-provisioner [fa973d114ac945d1a893e6ca7e8c2be9fdc00ee2b43156c1e95432093ff9c4d7] <==
	I1017 20:04:24.287917       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 20:04:54.293843       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-135652 -n old-k8s-version-135652
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-135652 -n old-k8s-version-135652: exit status 2 (366.650952ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-135652 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.41s)
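For cross-reference, the pause failure above can be re-checked by hand against the same profile. This is a sketch only, not part of the recorded run: the --format fields .Host and .APIServer appear in the helper commands in this report, while .Kubelet is assumed from minikube's default status output.

	# Sketch: query component states after `minikube pause`; the journal above shows
	# kubelet.service being stopped, so kubelet would be expected to report Stopped
	# while the host container stays Running (hence "exit status 2 (may be ok)").
	out/minikube-linux-arm64 status -p old-k8s-version-135652 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'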

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-413711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-413711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (287.591285ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:06:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-413711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
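As a sketch of how the failing check could be inspected by hand (an illustrative assumption, not something the harness runs): the error above comes from `sudo runc list -f json` finding no /run/runc directory on the CRI-O node, and the same container list can be read through the CRI socket with crictl instead.

	# Sketch: list all containers via crictl on the node, bypassing the runc call.
	out/minikube-linux-arm64 ssh -p no-preload-413711 -- sudo crictl ps -a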
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-413711 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-413711 describe deploy/metrics-server -n kube-system: exit status 1 (78.585903ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-413711 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
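When the metrics-server deployment does exist, the image override asserted at start_stop_delete_test.go:219 can be verified with a jsonpath query; the expression below is an illustrative assumption, not taken from the test code.

	# Sketch: print the container images of the metrics-server deployment and check
	# that they contain fake.domain/registry.k8s.io/echoserver:1.4.
	kubectl --context no-preload-413711 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'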
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-413711
helpers_test.go:243: (dbg) docker inspect no-preload-413711:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892",
	        "Created": "2025-10-17T20:05:21.029855804Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 458071,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:05:21.121566767Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892/hosts",
	        "LogPath": "/var/lib/docker/containers/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892-json.log",
	        "Name": "/no-preload-413711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-413711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-413711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892",
	                "LowerDir": "/var/lib/docker/overlay2/ed62f8f42dc7e0fa7067620dab65511a6702191cd284d34799df57c74af977a1-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ed62f8f42dc7e0fa7067620dab65511a6702191cd284d34799df57c74af977a1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ed62f8f42dc7e0fa7067620dab65511a6702191cd284d34799df57c74af977a1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ed62f8f42dc7e0fa7067620dab65511a6702191cd284d34799df57c74af977a1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-413711",
	                "Source": "/var/lib/docker/volumes/no-preload-413711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-413711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-413711",
	                "name.minikube.sigs.k8s.io": "no-preload-413711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2aa0b34ab76bdd4a25d34d4a4a29d445394470331fd8f22b4b708eeca203a81b",
	            "SandboxKey": "/var/run/docker/netns/2aa0b34ab76b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-413711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:e7:54:01:f3:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7a5bca7265808c00f6c846a52d60c76f955a6009c9954a0d43b577117c15f43c",
	                    "EndpointID": "b4bc2ac62f39d8ffa4331eac31d7ac14ed191cb027667056d0a093083e6ef203",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-413711",
	                        "b7258d1208d4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
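The NetworkSettings block above publishes the apiserver port 8443/tcp on 127.0.0.1:33422. A single-value query for that mapping (the --format template is an assumption for illustration, not part of the recorded run) would be:

	# Sketch: extract only the published host port for 8443/tcp from the container.
	docker inspect no-preload-413711 \
	  --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'
	# expected to print 33422 for the state captured above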
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-413711 -n no-preload-413711
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-413711 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-413711 logs -n 25: (1.267055413s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-804622 sudo crio config                                                                                                                                                                                                             │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ delete  │ -p cilium-804622                                                                                                                                                                                                                              │ cilium-804622             │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:01 UTC │
	│ start   │ -p force-systemd-env-945733 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-945733  │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:02 UTC │
	│ ssh     │ force-systemd-flag-285387 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-285387 │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:01 UTC │
	│ delete  │ -p force-systemd-flag-285387                                                                                                                                                                                                                  │ force-systemd-flag-285387 │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:01 UTC │
	│ start   │ -p cert-expiration-164379 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-164379    │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:02 UTC │
	│ delete  │ -p force-systemd-env-945733                                                                                                                                                                                                                   │ force-systemd-env-945733  │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ start   │ -p cert-options-533238 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ ssh     │ cert-options-533238 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ ssh     │ -p cert-options-533238 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ delete  │ -p cert-options-533238                                                                                                                                                                                                                        │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ start   │ -p old-k8s-version-135652 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:03 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-135652 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:03 UTC │                     │
	│ stop    │ -p old-k8s-version-135652 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:03 UTC │ 17 Oct 25 20:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-135652 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:04 UTC │ 17 Oct 25 20:04 UTC │
	│ start   │ -p old-k8s-version-135652 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:04 UTC │ 17 Oct 25 20:04 UTC │
	│ image   │ old-k8s-version-135652 image list --format=json                                                                                                                                                                                               │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ pause   │ -p old-k8s-version-135652 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │                     │
	│ delete  │ -p old-k8s-version-135652                                                                                                                                                                                                                     │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p cert-expiration-164379 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-164379    │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ delete  │ -p old-k8s-version-135652                                                                                                                                                                                                                     │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p no-preload-413711 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-413711         │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:06 UTC │
	│ delete  │ -p cert-expiration-164379                                                                                                                                                                                                                     │ cert-expiration-164379    │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-572724        │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-413711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-413711         │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:05:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:05:42.493147  461068 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:05:42.493682  461068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:05:42.493710  461068 out.go:374] Setting ErrFile to fd 2...
	I1017 20:05:42.493733  461068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:05:42.494090  461068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:05:42.494555  461068 out.go:368] Setting JSON to false
	I1017 20:05:42.495504  461068 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10093,"bootTime":1760721449,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 20:05:42.495592  461068 start.go:141] virtualization:  
	I1017 20:05:42.500879  461068 out.go:179] * [embed-certs-572724] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:05:42.504016  461068 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:05:42.504080  461068 notify.go:220] Checking for updates...
	I1017 20:05:42.510076  461068 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:05:42.513087  461068 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:05:42.516782  461068 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 20:05:42.519797  461068 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:05:42.522785  461068 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:05:42.526339  461068 config.go:182] Loaded profile config "no-preload-413711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:05:42.526512  461068 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:05:42.554176  461068 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:05:42.554318  461068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:05:42.667726  461068 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-17 20:05:42.657366188 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:05:42.667851  461068 docker.go:318] overlay module found
	I1017 20:05:42.671176  461068 out.go:179] * Using the docker driver based on user configuration
	I1017 20:05:42.674049  461068 start.go:305] selected driver: docker
	I1017 20:05:42.674068  461068 start.go:925] validating driver "docker" against <nil>
	I1017 20:05:42.674097  461068 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:05:42.674842  461068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:05:42.756206  461068 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-17 20:05:42.74635012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:05:42.756379  461068 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 20:05:42.756636  461068 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:05:42.759483  461068 out.go:179] * Using Docker driver with root privileges
	I1017 20:05:42.762341  461068 cni.go:84] Creating CNI manager for ""
	I1017 20:05:42.762411  461068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:05:42.762423  461068 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 20:05:42.762515  461068 start.go:349] cluster config:
	{Name:embed-certs-572724 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-572724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:05:42.767556  461068 out.go:179] * Starting "embed-certs-572724" primary control-plane node in "embed-certs-572724" cluster
	I1017 20:05:42.770449  461068 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:05:42.773343  461068 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:05:42.776153  461068 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:05:42.776240  461068 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:05:42.776251  461068 cache.go:58] Caching tarball of preloaded images
	I1017 20:05:42.776341  461068 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:05:42.776351  461068 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:05:42.776466  461068 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/config.json ...
	I1017 20:05:42.776483  461068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/config.json: {Name:mk763f29642314ca254adbaa774520024095ea6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:05:42.776607  461068 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:05:42.813355  461068 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:05:42.813392  461068 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:05:42.813407  461068 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:05:42.813430  461068 start.go:360] acquireMachinesLock for embed-certs-572724: {Name:mkd392efc9f089fa6f99fda7caa0023fa20afc6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:05:42.813539  461068 start.go:364] duration metric: took 88.342µs to acquireMachinesLock for "embed-certs-572724"
	I1017 20:05:42.813570  461068 start.go:93] Provisioning new machine with config: &{Name:embed-certs-572724 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-572724 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:05:42.813642  461068 start.go:125] createHost starting for "" (driver="docker")
	I1017 20:05:41.504779  457767 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.283383873s)
	I1017 20:05:41.504803  457767 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1017 20:05:41.504822  457767 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1017 20:05:41.504871  457767 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1017 20:05:42.285344  457767 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1017 20:05:42.285384  457767 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1017 20:05:42.285443  457767 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1017 20:05:44.272165  457767 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.986700253s)
	I1017 20:05:44.272192  457767 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1017 20:05:44.272210  457767 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1017 20:05:44.272253  457767 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1017 20:05:42.816966  461068 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 20:05:42.817195  461068 start.go:159] libmachine.API.Create for "embed-certs-572724" (driver="docker")
	I1017 20:05:42.817240  461068 client.go:168] LocalClient.Create starting
	I1017 20:05:42.817335  461068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem
	I1017 20:05:42.817374  461068 main.go:141] libmachine: Decoding PEM data...
	I1017 20:05:42.817392  461068 main.go:141] libmachine: Parsing certificate...
	I1017 20:05:42.817451  461068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem
	I1017 20:05:42.817478  461068 main.go:141] libmachine: Decoding PEM data...
	I1017 20:05:42.817493  461068 main.go:141] libmachine: Parsing certificate...
	I1017 20:05:42.817905  461068 cli_runner.go:164] Run: docker network inspect embed-certs-572724 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 20:05:42.836661  461068 cli_runner.go:211] docker network inspect embed-certs-572724 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 20:05:42.836746  461068 network_create.go:284] running [docker network inspect embed-certs-572724] to gather additional debugging logs...
	I1017 20:05:42.836771  461068 cli_runner.go:164] Run: docker network inspect embed-certs-572724
	W1017 20:05:42.868628  461068 cli_runner.go:211] docker network inspect embed-certs-572724 returned with exit code 1
	I1017 20:05:42.868657  461068 network_create.go:287] error running [docker network inspect embed-certs-572724]: docker network inspect embed-certs-572724: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-572724 not found
	I1017 20:05:42.868670  461068 network_create.go:289] output of [docker network inspect embed-certs-572724]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-572724 not found
	
	** /stderr **
	I1017 20:05:42.868776  461068 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:05:42.885906  461068 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9f667d9c3ea2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:fc:1d:c6:d2:da} reservation:<nil>}
	I1017 20:05:42.886161  461068 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-82a22734829b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:22:5a:78:c5:e0:0a} reservation:<nil>}
	I1017 20:05:42.886488  461068 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0b88bd3b523f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:75:74:cd:15:9b} reservation:<nil>}
	I1017 20:05:42.886778  461068 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7a5bca726580 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b2:8f:4d:c7:4c:4d} reservation:<nil>}
	I1017 20:05:42.887151  461068 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e2e60}
	I1017 20:05:42.887168  461068 network_create.go:124] attempt to create docker network embed-certs-572724 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1017 20:05:42.887224  461068 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-572724 embed-certs-572724
	I1017 20:05:42.977746  461068 network_create.go:108] docker network embed-certs-572724 192.168.85.0/24 created
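For context on the subnet probing just above (192.168.49.0/24 skipped as taken, 192.168.85.0/24 chosen), here is a minimal Go sketch of the same idea. The step-of-9 walk over the third octet is inferred from this log, not taken from minikube's source.

package main

import "fmt"

func main() {
	// Subnets already in use, as reported for the existing minikube bridges above.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	// Walk candidate /24s; the log suggests third octets 49, 58, 67, 76, 85, ...
	// are probed until a free one is found.
	for octet := 49; octet < 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			fmt.Println("using free private subnet", cidr) // prints 192.168.85.0/24 here
			return
		}
	}
	fmt.Println("no free subnet found")
}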
	I1017 20:05:42.977774  461068 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-572724" container
	I1017 20:05:42.977851  461068 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 20:05:43.008949  461068 cli_runner.go:164] Run: docker volume create embed-certs-572724 --label name.minikube.sigs.k8s.io=embed-certs-572724 --label created_by.minikube.sigs.k8s.io=true
	I1017 20:05:43.038601  461068 oci.go:103] Successfully created a docker volume embed-certs-572724
	I1017 20:05:43.038678  461068 cli_runner.go:164] Run: docker run --rm --name embed-certs-572724-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-572724 --entrypoint /usr/bin/test -v embed-certs-572724:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 20:05:43.941364  461068 oci.go:107] Successfully prepared a docker volume embed-certs-572724
	I1017 20:05:43.941400  461068 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:05:43.941420  461068 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 20:05:43.941482  461068 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-572724:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1017 20:05:46.134398  457767 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.862119092s)
	I1017 20:05:46.134427  457767 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1017 20:05:46.134444  457767 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1017 20:05:46.134492  457767 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1017 20:05:49.491109  461068 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-572724:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.549592238s)
	I1017 20:05:49.491146  461068 kic.go:203] duration metric: took 5.549722426s to extract preloaded images to volume ...
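The preload step above streams an lz4-compressed tarball into the docker volume. A small stand-alone sketch of reading such an archive; it only lists entry names rather than extracting, and the pierrec/lz4 package plus the local file path are assumptions made for illustration.

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"log"
	"os"

	"github.com/pierrec/lz4/v4"
)

func main() {
	// Open a local copy of the preload tarball and walk its tar entries.
	f, err := os.Open("preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	tr := tar.NewReader(lz4.NewReader(f))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(hdr.Name)
	}
}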
	W1017 20:05:49.491275  461068 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1017 20:05:49.491374  461068 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 20:05:49.589435  461068 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-572724 --name embed-certs-572724 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-572724 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-572724 --network embed-certs-572724 --ip 192.168.85.2 --volume embed-certs-572724:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 20:05:50.994741  461068 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-572724 --name embed-certs-572724 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-572724 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-572724 --network embed-certs-572724 --ip 192.168.85.2 --volume embed-certs-572724:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6: (1.405253528s)
	I1017 20:05:50.994820  461068 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Running}}
	I1017 20:05:51.078186  461068 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Status}}
	I1017 20:05:51.123262  461068 cli_runner.go:164] Run: docker exec embed-certs-572724 stat /var/lib/dpkg/alternatives/iptables
	I1017 20:05:51.232420  461068 oci.go:144] the created container "embed-certs-572724" has a running status.
	I1017 20:05:51.232457  461068 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa...
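A rough illustration of the "Creating ssh key for kic" step: generating an RSA key and writing it in the PEM form an id_rsa file uses. The key size and output path here are illustrative, not necessarily what kic.go does.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"
)

func main() {
	// Generate a 2048-bit RSA key (size assumed) and persist it with 0600 permissions.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	block := &pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	}
	if err := os.WriteFile("id_rsa", pem.EncodeToMemory(block), 0o600); err != nil {
		log.Fatal(err)
	}
}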
	I1017 20:05:50.594661  457767 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.460142123s)
	I1017 20:05:50.594685  457767 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1017 20:05:50.594704  457767 cache_images.go:124] Successfully loaded all cached images
	I1017 20:05:50.594710  457767 cache_images.go:93] duration metric: took 18.833893703s to LoadCachedImages
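The image-load loop interleaved above comes down to running "sudo podman load -i <tarball>" once per cached image and timing each load; a sketch under that assumption, with the image list trimmed to two of the archives named in the log.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	// Tarballs already transferred to the node, as in the log above.
	images := []string{
		"/var/lib/minikube/images/kube-proxy_v1.34.1",
		"/var/lib/minikube/images/kube-apiserver_v1.34.1",
	}
	for _, img := range images {
		start := time.Now()
		out, err := exec.Command("sudo", "podman", "load", "-i", img).CombinedOutput()
		if err != nil {
			log.Fatalf("podman load %s: %v\n%s", img, err, out)
		}
		fmt.Printf("loaded %s in %s\n", img, time.Since(start))
	}
}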
	I1017 20:05:50.594718  457767 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1017 20:05:50.594803  457767 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-413711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-413711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:05:50.594881  457767 ssh_runner.go:195] Run: crio config
	I1017 20:05:50.702328  457767 cni.go:84] Creating CNI manager for ""
	I1017 20:05:50.702405  457767 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:05:50.702438  457767 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:05:50.702489  457767 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-413711 NodeName:no-preload-413711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:05:50.702663  457767 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-413711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
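	The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that walks such a file and prints each document's apiVersion and kind, assuming gopkg.in/yaml.v3 and a local copy named kubeadm.yaml:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// yaml.v3 decodes one document per Decode call, so looping until EOF
	// visits every "---"-separated section in turn.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}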
	
	I1017 20:05:50.702764  457767 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:05:50.731753  457767 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1017 20:05:50.731875  457767 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1017 20:05:50.739554  457767 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1017 20:05:50.739651  457767 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1017 20:05:50.740309  457767 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21753-257739/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1017 20:05:50.740314  457767 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21753-257739/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1017 20:05:50.744466  457767 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1017 20:05:50.744499  457767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1017 20:05:51.875090  457767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:05:51.905026  457767 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1017 20:05:51.909219  457767 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1017 20:05:51.909257  457767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1017 20:05:51.957390  457767 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1017 20:05:51.979291  457767 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1017 20:05:51.979325  457767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
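The kubelet/kubeadm/kubectl downloads above use URLs of the form ...?checksum=file:<url>.sha256, i.e. each binary is verified against a published digest. A stdlib-only sketch of the same download-and-verify idea for the kubelet binary; error handling is kept minimal and the output path is arbitrary.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the hex SHA-256 of the bytes written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet"
	got, err := fetch(base, "kubelet")
	if err != nil {
		log.Fatal(err)
	}
	// The published .sha256 file holds the expected digest (possibly followed by a filename).
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	raw, _ := io.ReadAll(resp.Body)
	fields := strings.Fields(string(raw))
	if len(fields) == 0 || got != fields[0] {
		log.Fatalf("checksum mismatch: got %s want %s", got, string(raw))
	}
	fmt.Println("kubelet verified:", got)
}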
	I1017 20:05:52.818431  457767 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:05:52.831072  457767 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1017 20:05:52.856675  457767 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:05:52.895871  457767 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1017 20:05:52.928821  457767 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:05:52.933805  457767 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
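The /etc/hosts pipeline just above drops any stale control-plane.minikube.internal line and appends the current mapping. The same idempotent edit as a sketch; it rewrites whatever hosts file it is pointed at and needs root for the real /etc/hosts.

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.76.2\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	// Keep every line except an existing control-plane mapping, then append the fresh one.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
	fmt.Println("hosts entry ensured")
}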
	I1017 20:05:52.954596  457767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:05:53.155241  457767 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:05:53.196855  457767 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711 for IP: 192.168.76.2
	I1017 20:05:53.196873  457767 certs.go:195] generating shared ca certs ...
	I1017 20:05:53.196888  457767 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:05:53.197019  457767 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 20:05:53.197058  457767 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 20:05:53.197064  457767 certs.go:257] generating profile certs ...
	I1017 20:05:53.197118  457767 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/client.key
	I1017 20:05:53.197128  457767 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/client.crt with IP's: []
	I1017 20:05:55.046002  457767 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/client.crt ...
	I1017 20:05:55.046083  457767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/client.crt: {Name:mk0b2b237e0379020885e22ad8f7629cc1cb9505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:05:55.046328  457767 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/client.key ...
	I1017 20:05:55.046367  457767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/client.key: {Name:mkffdbf06c054c2d4161ac4ffbd21a008e5443c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:05:55.046520  457767 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/apiserver.key.420d8401
	I1017 20:05:55.046560  457767 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/apiserver.crt.420d8401 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1017 20:05:55.230810  457767 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/apiserver.crt.420d8401 ...
	I1017 20:05:55.230842  457767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/apiserver.crt.420d8401: {Name:mkc8ef76418c7031901c37da828810b802a1ec5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:05:55.231025  457767 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/apiserver.key.420d8401 ...
	I1017 20:05:55.231040  457767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/apiserver.key.420d8401: {Name:mka1227ed8a0c2c1a7fcd3b5bc5ec786f491f2f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:05:55.231126  457767 certs.go:382] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/apiserver.crt.420d8401 -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/apiserver.crt
	I1017 20:05:55.231210  457767 certs.go:386] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/apiserver.key.420d8401 -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/apiserver.key
	I1017 20:05:55.231272  457767 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/proxy-client.key
	I1017 20:05:55.231291  457767 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/proxy-client.crt with IP's: []
	I1017 20:05:55.691181  457767 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/proxy-client.crt ...
	I1017 20:05:55.691266  457767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/proxy-client.crt: {Name:mk3a403cc427554721a304f951e71db9967ba2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:05:55.691534  457767 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/proxy-client.key ...
	I1017 20:05:55.691572  457767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/proxy-client.key: {Name:mkadd35763b938940ffbc99ea87295893f07bcef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:05:55.691845  457767 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 20:05:55.691916  457767 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 20:05:55.691941  457767 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:05:55.692003  457767 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:05:55.692071  457767 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:05:55.692120  457767 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 20:05:55.692220  457767 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:05:55.692949  457767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:05:55.709692  457767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:05:55.730430  457767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:05:55.747305  457767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 20:05:55.765235  457767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 20:05:55.784039  457767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:05:55.803339  457767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:05:55.822591  457767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:05:55.843287  457767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:05:55.863999  457767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 20:05:55.884387  457767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 20:05:55.904077  457767 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:05:55.918354  457767 ssh_runner.go:195] Run: openssl version
	I1017 20:05:55.924909  457767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 20:05:55.933952  457767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 20:05:55.938069  457767 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 20:05:55.938132  457767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 20:05:55.986615  457767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:05:55.995852  457767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:05:56.006214  457767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:05:56.011161  457767 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:05:56.011230  457767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:05:56.056023  457767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:05:56.065581  457767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 20:05:56.075234  457767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 20:05:56.079723  457767 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 20:05:56.079787  457767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 20:05:56.123019  457767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 20:05:56.132126  457767 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:05:56.136681  457767 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 20:05:56.136744  457767 kubeadm.go:400] StartCluster: {Name:no-preload-413711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-413711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:05:56.136831  457767 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:05:56.136898  457767 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:05:56.174793  457767 cri.go:89] found id: ""
	I1017 20:05:56.174873  457767 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:05:56.185732  457767 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:05:56.194664  457767 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 20:05:56.194731  457767 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:05:56.206025  457767 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:05:56.206046  457767 kubeadm.go:157] found existing configuration files:
	
	I1017 20:05:56.206101  457767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 20:05:56.215438  457767 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:05:56.215503  457767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:05:56.223675  457767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 20:05:56.233140  457767 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:05:56.233212  457767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:05:56.241601  457767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 20:05:56.251127  457767 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:05:56.251197  457767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:05:56.259995  457767 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 20:05:56.269509  457767 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:05:56.269573  457767 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
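The sequence above is the stale-config sweep minikube runs before kubeadm init: each expected kubeconfig is grepped for the control-plane endpoint and removed when the check fails (on this first start the files simply do not exist yet). A minimal Go sketch of that loop, assuming the endpoint and file list shown in the log; this is not minikube's actual implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Assumed endpoint and file list, copied from the log above.
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is missing,
		// which is the "may not be in ... - will remove" case in the log.
		if err := exec.Command("sudo", "grep", "-q", endpoint, f).Run(); err != nil {
			fmt.Printf("removing stale %s\n", f)
			// Best-effort removal, mirroring `sudo rm -f`.
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}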
	I1017 20:05:56.278450  457767 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 20:05:56.372442  457767 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 20:05:56.372843  457767 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 20:05:56.404847  457767 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 20:05:56.404943  457767 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1017 20:05:56.404986  457767 kubeadm.go:318] OS: Linux
	I1017 20:05:56.405036  457767 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 20:05:56.405091  457767 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1017 20:05:56.405150  457767 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 20:05:56.405208  457767 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 20:05:56.405263  457767 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 20:05:56.405321  457767 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 20:05:56.405374  457767 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 20:05:56.405435  457767 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 20:05:56.405487  457767 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1017 20:05:56.481076  457767 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 20:05:56.481194  457767 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 20:05:56.481297  457767 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 20:05:56.506375  457767 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 20:05:54.754781  461068 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 20:05:54.780240  461068 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Status}}
	I1017 20:05:54.806132  461068 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 20:05:54.806151  461068 kic_runner.go:114] Args: [docker exec --privileged embed-certs-572724 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 20:05:54.870366  461068 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Status}}
	I1017 20:05:54.889649  461068 machine.go:93] provisionDockerMachine start ...
	I1017 20:05:54.889737  461068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:05:54.916705  461068 main.go:141] libmachine: Using SSH client type: native
	I1017 20:05:54.917037  461068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33424 <nil> <nil>}
	I1017 20:05:54.917047  461068 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:05:55.077028  461068 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-572724
	
	I1017 20:05:55.077058  461068 ubuntu.go:182] provisioning hostname "embed-certs-572724"
	I1017 20:05:55.077122  461068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:05:55.100567  461068 main.go:141] libmachine: Using SSH client type: native
	I1017 20:05:55.100886  461068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33424 <nil> <nil>}
	I1017 20:05:55.100901  461068 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-572724 && echo "embed-certs-572724" | sudo tee /etc/hostname
	I1017 20:05:55.267493  461068 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-572724
	
	I1017 20:05:55.267566  461068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:05:55.290722  461068 main.go:141] libmachine: Using SSH client type: native
	I1017 20:05:55.291029  461068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33424 <nil> <nil>}
	I1017 20:05:55.291052  461068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-572724' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-572724/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-572724' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:05:55.444782  461068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
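The provisioning exchange above (hostname, sudo hostname ..., the /etc/hosts edit) runs over a forwarded SSH port on the kicbase container. A minimal sketch of the first step, assuming the port 33424, the docker user, and the id_rsa path shown in the log, and using golang.org/x/crypto/ssh rather than minikube's libmachine client:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and forwarded port are assumptions taken from this run's log.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local test node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33424", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	// `hostname` is the first command provisionDockerMachine issues in the log.
	out, err := sess.Output("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("node hostname: %s", out)
}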
	I1017 20:05:55.444810  461068 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 20:05:55.444838  461068 ubuntu.go:190] setting up certificates
	I1017 20:05:55.444857  461068 provision.go:84] configureAuth start
	I1017 20:05:55.444918  461068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-572724
	I1017 20:05:55.466633  461068 provision.go:143] copyHostCerts
	I1017 20:05:55.466701  461068 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 20:05:55.466715  461068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 20:05:55.466791  461068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 20:05:55.466900  461068 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 20:05:55.466912  461068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 20:05:55.466941  461068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 20:05:55.467006  461068 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 20:05:55.467016  461068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 20:05:55.467042  461068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 20:05:55.467104  461068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.embed-certs-572724 san=[127.0.0.1 192.168.85.2 embed-certs-572724 localhost minikube]
	I1017 20:05:56.336889  461068 provision.go:177] copyRemoteCerts
	I1017 20:05:56.337043  461068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:05:56.337112  461068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:05:56.363655  461068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:05:56.468922  461068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:05:56.486639  461068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 20:05:56.513338  461068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:05:56.539004  461068 provision.go:87] duration metric: took 1.094118892s to configureAuth
	I1017 20:05:56.539087  461068 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:05:56.539327  461068 config.go:182] Loaded profile config "embed-certs-572724": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:05:56.539483  461068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:05:56.567243  461068 main.go:141] libmachine: Using SSH client type: native
	I1017 20:05:56.567546  461068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33424 <nil> <nil>}
	I1017 20:05:56.567561  461068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:05:56.849483  461068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:05:56.849515  461068 machine.go:96] duration metric: took 1.959848875s to provisionDockerMachine
	I1017 20:05:56.849553  461068 client.go:171] duration metric: took 14.03227465s to LocalClient.Create
	I1017 20:05:56.849575  461068 start.go:167] duration metric: took 14.032380427s to libmachine.API.Create "embed-certs-572724"
	I1017 20:05:56.849584  461068 start.go:293] postStartSetup for "embed-certs-572724" (driver="docker")
	I1017 20:05:56.849594  461068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:05:56.849669  461068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:05:56.849729  461068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:05:56.867710  461068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:05:56.973264  461068 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:05:56.977042  461068 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:05:56.977073  461068 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:05:56.977084  461068 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 20:05:56.977144  461068 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 20:05:56.977234  461068 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 20:05:56.977344  461068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:05:56.985518  461068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:05:57.005736  461068 start.go:296] duration metric: took 156.119788ms for postStartSetup
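postStartSetup above scans $MINIKUBE_HOME/files and mirrors each file to the same path on the node (files/etc/ssl/certs/2595962.pem becomes /etc/ssl/certs/2595962.pem). A minimal sketch of that scan, assuming the root path from the log and stubbing the actual scp step with a print:

package main

import (
	"fmt"
	"io/fs"
	"log"
	"path/filepath"
	"strings"
)

func main() {
	// Assumed MINIKUBE_HOME layout from the log above.
	root := "/home/jenkins/minikube-integration/21753-257739/.minikube/files"
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, walkErr error) error {
		if walkErr != nil || d.IsDir() {
			return walkErr
		}
		// files/<path> on the host maps to /<path> on the node.
		target := "/" + strings.TrimPrefix(path, root+"/")
		fmt.Printf("local asset: %s -> %s\n", path, target)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}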
	I1017 20:05:57.006135  461068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-572724
	I1017 20:05:57.030145  461068 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/config.json ...
	I1017 20:05:57.030437  461068 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:05:57.030497  461068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:05:57.054386  461068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:05:57.162214  461068 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:05:57.167253  461068 start.go:128] duration metric: took 14.35359469s to createHost
	I1017 20:05:57.167279  461068 start.go:83] releasing machines lock for "embed-certs-572724", held for 14.353725673s
	I1017 20:05:57.167353  461068 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-572724
	I1017 20:05:57.186489  461068 ssh_runner.go:195] Run: cat /version.json
	I1017 20:05:57.186543  461068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:05:57.186550  461068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:05:57.186617  461068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:05:57.220041  461068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:05:57.229632  461068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:05:57.336917  461068 ssh_runner.go:195] Run: systemctl --version
	I1017 20:05:57.439627  461068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:05:57.480021  461068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:05:57.484911  461068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:05:57.485025  461068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:05:57.518537  461068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1017 20:05:57.518619  461068 start.go:495] detecting cgroup driver to use...
	I1017 20:05:57.518669  461068 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:05:57.518765  461068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:05:57.537691  461068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:05:57.552251  461068 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:05:57.552335  461068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:05:57.571319  461068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:05:57.590654  461068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:05:57.735354  461068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:05:57.921755  461068 docker.go:234] disabling docker service ...
	I1017 20:05:57.921849  461068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:05:57.947159  461068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:05:57.961548  461068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:05:58.112274  461068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:05:58.263864  461068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:05:58.278851  461068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:05:58.293597  461068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:05:58.293737  461068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:05:58.302695  461068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:05:58.302802  461068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:05:58.311460  461068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:05:58.319902  461068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:05:58.329195  461068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:05:58.337729  461068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:05:58.346798  461068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:05:58.360609  461068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:05:58.370034  461068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:05:58.378193  461068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:05:58.386264  461068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:05:58.553249  461068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:05:58.719360  461068 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:05:58.719490  461068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:05:58.724100  461068 start.go:563] Will wait 60s for crictl version
	I1017 20:05:58.724261  461068 ssh_runner.go:195] Run: which crictl
	I1017 20:05:58.728471  461068 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:05:58.764832  461068 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:05:58.764981  461068 ssh_runner.go:195] Run: crio --version
	I1017 20:05:58.802931  461068 ssh_runner.go:195] Run: crio --version
	I1017 20:05:58.841186  461068 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
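The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the expected pause image and the cgroupfs cgroup manager, then restart the service. A minimal local sketch of those two edits as a plain file rewrite, not the ssh_runner-driven sed the log shows:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	// Replace any existing pause_image / cgroup_manager lines, as the sed commands do.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		log.Fatal(err)
	}
	// CRI-O must then be restarted (systemctl restart crio) for the new
	// pause image and cgroup driver to take effect, as in the log.
}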
	I1017 20:05:56.513496  457767 out.go:252]   - Generating certificates and keys ...
	I1017 20:05:56.513588  457767 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 20:05:56.513666  457767 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 20:05:57.205103  457767 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 20:05:57.283489  457767 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 20:05:57.729553  457767 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 20:05:57.971621  457767 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 20:05:58.977359  457767 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 20:05:58.977915  457767 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-413711] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1017 20:05:59.561118  457767 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 20:05:59.561693  457767 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-413711] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1017 20:05:58.843882  461068 cli_runner.go:164] Run: docker network inspect embed-certs-572724 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:05:58.860292  461068 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 20:05:58.864434  461068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:05:58.874445  461068 kubeadm.go:883] updating cluster {Name:embed-certs-572724 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-572724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:05:58.874566  461068 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:05:58.874628  461068 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:05:58.912090  461068 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:05:58.912117  461068 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:05:58.912173  461068 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:05:58.942932  461068 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:05:58.942958  461068 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:05:58.942966  461068 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1017 20:05:58.943099  461068 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-572724 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-572724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:05:58.943208  461068 ssh_runner.go:195] Run: crio config
	I1017 20:05:59.005282  461068 cni.go:84] Creating CNI manager for ""
	I1017 20:05:59.005308  461068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:05:59.005345  461068 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:05:59.005372  461068 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-572724 NodeName:embed-certs-572724 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:05:59.005539  461068 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-572724"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
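	The multi-document kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines later. A minimal sketch that splits the file and prints each document's apiVersion/kind as a quick sanity check; gopkg.in/yaml.v3 is an assumption, any YAML decoder works:

package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(raw))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		// Expect InitConfiguration, ClusterConfiguration, KubeletConfiguration
		// and KubeProxyConfiguration, matching the config shown in the log.
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}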
	
	I1017 20:05:59.005627  461068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:05:59.017609  461068 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:05:59.017709  461068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:05:59.028875  461068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1017 20:05:59.042627  461068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:05:59.061006  461068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1017 20:05:59.077519  461068 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:05:59.082044  461068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:05:59.093402  461068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:05:59.238337  461068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:05:59.256985  461068 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724 for IP: 192.168.85.2
	I1017 20:05:59.257073  461068 certs.go:195] generating shared ca certs ...
	I1017 20:05:59.257106  461068 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:05:59.257318  461068 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 20:05:59.257404  461068 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 20:05:59.257427  461068 certs.go:257] generating profile certs ...
	I1017 20:05:59.257520  461068 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/client.key
	I1017 20:05:59.257575  461068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/client.crt with IP's: []
	I1017 20:05:59.896157  461068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/client.crt ...
	I1017 20:05:59.896187  461068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/client.crt: {Name:mk895c097528b7eb2ce06448d8609a5c2149f8ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:05:59.896387  461068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/client.key ...
	I1017 20:05:59.896400  461068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/client.key: {Name:mk60e1ba438d622fde6aaaa65e25fc15b7c3d7fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:05:59.896490  461068 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/apiserver.key.5b851251
	I1017 20:05:59.896512  461068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/apiserver.crt.5b851251 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1017 20:06:00.348379  461068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/apiserver.crt.5b851251 ...
	I1017 20:06:00.348473  461068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/apiserver.crt.5b851251: {Name:mke844fdb2f57366fdd108ab0c736da257c63e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:06:00.348781  461068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/apiserver.key.5b851251 ...
	I1017 20:06:00.348831  461068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/apiserver.key.5b851251: {Name:mkaef4f7c1076e12d38bcfc384f0c54e9f4ad734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:06:00.349016  461068 certs.go:382] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/apiserver.crt.5b851251 -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/apiserver.crt
	I1017 20:06:00.349151  461068 certs.go:386] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/apiserver.key.5b851251 -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/apiserver.key
	I1017 20:06:00.349270  461068 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/proxy-client.key
	I1017 20:06:00.349326  461068 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/proxy-client.crt with IP's: []
	I1017 20:06:00.528203  461068 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/proxy-client.crt ...
	I1017 20:06:00.528291  461068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/proxy-client.crt: {Name:mk07f4a219031acebafbdf1a59ea4a5de28fae97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:06:00.528564  461068 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/proxy-client.key ...
	I1017 20:06:00.528613  461068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/proxy-client.key: {Name:mk868a58174ee06ce97bacfb0ea41c93f0acd63a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
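The profile-cert generation above produces an apiserver serving certificate whose IP SANs are 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.85.2, plus a client cert and the aggregator proxy-client cert. A minimal self-contained sketch of the SAN part, assuming a throwaway self-signed key in place of minikube's shared minikubeCA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // roughly the 26280h CertExpiration in the log
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // SANs copied from the log above
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.85.2"),
		},
	}
	// Self-signed here; minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}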
	I1017 20:06:00.528904  461068 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 20:06:00.528997  461068 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 20:06:00.529024  461068 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:06:00.529081  461068 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:06:00.529129  461068 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:06:00.529185  461068 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 20:06:00.529261  461068 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:06:00.529952  461068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:06:00.551472  461068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:06:00.573639  461068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:06:00.592204  461068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 20:06:00.611859  461068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1017 20:06:00.631870  461068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:06:00.652178  461068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:06:00.671940  461068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:06:00.691322  461068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 20:06:00.710660  461068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 20:06:00.730711  461068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:06:00.751272  461068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:06:00.765369  461068 ssh_runner.go:195] Run: openssl version
	I1017 20:06:00.772084  461068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 20:06:00.781284  461068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 20:06:00.785399  461068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 20:06:00.785497  461068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 20:06:00.827179  461068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:06:00.836285  461068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:06:00.845136  461068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:06:00.849102  461068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:06:00.849197  461068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:06:00.893045  461068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:06:00.902073  461068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 20:06:00.910801  461068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 20:06:00.914808  461068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 20:06:00.914905  461068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 20:06:00.957412  461068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
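The openssl/ln pairs above wire the extra CAs into the node's trust store: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its OpenSSL subject hash with a ".0" suffix (51391683.0, 3ec20f2e.0, b5213941.0 in this run), which is how OpenSSL locates CAs by hash. A minimal sketch of that step, assuming the three cert paths from the log:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath string) error {
	// `openssl x509 -hash -noout -in <cert>` prints the subject hash, e.g. 51391683.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any existing link, matching `ln -fs` in the log.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	for _, c := range []string{
		"/usr/share/ca-certificates/259596.pem",
		"/usr/share/ca-certificates/2595962.pem",
		"/usr/share/ca-certificates/minikubeCA.pem",
	} {
		if err := linkBySubjectHash(c); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked", c)
	}
}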
	I1017 20:06:00.966676  461068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:06:00.971057  461068 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 20:06:00.971150  461068 kubeadm.go:400] StartCluster: {Name:embed-certs-572724 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-572724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:06:00.971241  461068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:06:00.971319  461068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:06:01.007088  461068 cri.go:89] found id: ""
	I1017 20:06:01.007180  461068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:06:01.017534  461068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:06:01.026203  461068 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 20:06:01.026290  461068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:06:01.041051  461068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:06:01.041072  461068 kubeadm.go:157] found existing configuration files:
	
	I1017 20:06:01.041162  461068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 20:06:01.054937  461068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:06:01.055035  461068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:06:01.065023  461068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 20:06:01.076303  461068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:06:01.076419  461068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:06:01.083756  461068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 20:06:01.098903  461068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:06:01.099004  461068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:06:01.110769  461068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 20:06:01.121872  461068 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:06:01.121969  461068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 20:06:01.129864  461068 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 20:06:01.191827  461068 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 20:06:01.192105  461068 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 20:06:01.259429  461068 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 20:06:01.259548  461068 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1017 20:06:01.259625  461068 kubeadm.go:318] OS: Linux
	I1017 20:06:01.259692  461068 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 20:06:01.259778  461068 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1017 20:06:01.259849  461068 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 20:06:01.259919  461068 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 20:06:01.259988  461068 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 20:06:01.260064  461068 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 20:06:01.260128  461068 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 20:06:01.260197  461068 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 20:06:01.260289  461068 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1017 20:06:01.378202  461068 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 20:06:01.378365  461068 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 20:06:01.378490  461068 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 20:06:01.390375  461068 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 20:06:01.396576  461068 out.go:252]   - Generating certificates and keys ...
	I1017 20:06:01.396700  461068 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 20:06:01.396802  461068 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 20:06:01.541903  461068 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 20:06:01.612884  461068 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 20:06:02.350489  461068 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 20:06:00.075754  457767 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 20:06:00.291716  457767 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 20:06:01.430405  457767 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 20:06:01.430918  457767 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 20:06:02.112978  457767 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 20:06:02.377045  457767 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 20:06:02.846077  457767 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 20:06:03.005812  457767 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 20:06:04.620826  457767 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 20:06:04.621918  457767 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 20:06:04.624820  457767 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 20:06:04.628605  457767 out.go:252]   - Booting up control plane ...
	I1017 20:06:04.628709  457767 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 20:06:04.628791  457767 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 20:06:04.629707  457767 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 20:06:04.663310  457767 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 20:06:04.663422  457767 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 20:06:04.673605  457767 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 20:06:04.674467  457767 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 20:06:04.674518  457767 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 20:06:04.833166  457767 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 20:06:04.841145  457767 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 20:06:02.896402  461068 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 20:06:03.006112  461068 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 20:06:03.006620  461068 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-572724 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 20:06:03.587524  461068 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 20:06:03.587868  461068 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-572724 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 20:06:04.021652  461068 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 20:06:04.549437  461068 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 20:06:04.764419  461068 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 20:06:04.764496  461068 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 20:06:04.924940  461068 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 20:06:05.272873  461068 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 20:06:05.486932  461068 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 20:06:05.809136  461068 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 20:06:06.076984  461068 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 20:06:06.081098  461068 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 20:06:06.100911  461068 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 20:06:06.106958  461068 out.go:252]   - Booting up control plane ...
	I1017 20:06:06.107102  461068 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 20:06:06.107195  461068 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 20:06:06.107450  461068 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 20:06:06.143818  461068 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 20:06:06.143932  461068 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 20:06:06.155342  461068 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 20:06:06.160916  461068 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 20:06:06.160979  461068 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 20:06:06.340539  461068 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 20:06:06.340670  461068 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 20:06:05.848183  457767 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.003695133s
	I1017 20:06:05.848298  457767 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 20:06:05.848425  457767 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1017 20:06:05.848558  457767 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 20:06:05.848652  457767 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 20:06:07.844869  461068 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500798519s
	I1017 20:06:07.844988  461068 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 20:06:07.845080  461068 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1017 20:06:07.845179  461068 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 20:06:07.845265  461068 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 20:06:09.958420  457767 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.110442719s
	I1017 20:06:15.252913  457767 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 9.405317372s
	I1017 20:06:15.351408  457767 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.503652605s
	I1017 20:06:15.393206  457767 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 20:06:15.440159  457767 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 20:06:15.475629  457767 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 20:06:15.475864  457767 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-413711 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 20:06:15.509444  457767 kubeadm.go:318] [bootstrap-token] Using token: 735cl2.5baphmeseqi079wf
	I1017 20:06:15.512345  457767 out.go:252]   - Configuring RBAC rules ...
	I1017 20:06:15.512477  457767 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 20:06:15.535376  457767 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 20:06:15.561887  457767 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 20:06:15.575193  457767 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 20:06:15.582976  457767 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 20:06:15.592360  457767 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 20:06:15.786008  457767 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 20:06:16.197657  457767 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 20:06:16.758928  457767 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 20:06:16.760289  457767 kubeadm.go:318] 
	I1017 20:06:16.760370  457767 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 20:06:16.760382  457767 kubeadm.go:318] 
	I1017 20:06:16.760464  457767 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 20:06:16.760474  457767 kubeadm.go:318] 
	I1017 20:06:16.760500  457767 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 20:06:16.760580  457767 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 20:06:16.760636  457767 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 20:06:16.760646  457767 kubeadm.go:318] 
	I1017 20:06:16.760702  457767 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 20:06:16.760709  457767 kubeadm.go:318] 
	I1017 20:06:16.760759  457767 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 20:06:16.760769  457767 kubeadm.go:318] 
	I1017 20:06:16.760823  457767 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 20:06:16.760906  457767 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 20:06:16.760980  457767 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 20:06:16.760989  457767 kubeadm.go:318] 
	I1017 20:06:16.761077  457767 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 20:06:16.761161  457767 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 20:06:16.761172  457767 kubeadm.go:318] 
	I1017 20:06:16.761261  457767 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 735cl2.5baphmeseqi079wf \
	I1017 20:06:16.761372  457767 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 \
	I1017 20:06:16.761398  457767 kubeadm.go:318] 	--control-plane 
	I1017 20:06:16.761406  457767 kubeadm.go:318] 
	I1017 20:06:16.761530  457767 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 20:06:16.761541  457767 kubeadm.go:318] 
	I1017 20:06:16.761627  457767 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 735cl2.5baphmeseqi079wf \
	I1017 20:06:16.761755  457767 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 
	I1017 20:06:16.769950  457767 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1017 20:06:16.770212  457767 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1017 20:06:16.770334  457767 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
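Note: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA's public key. For reference only (this is the command documented for kubeadm joins, not something the test harness runs, and it assumes the default RSA CA kubeadm generates), it can be recomputed on the control-plane node with:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'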
	I1017 20:06:16.770416  457767 cni.go:84] Creating CNI manager for ""
	I1017 20:06:16.770427  457767 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:06:16.775452  457767 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 20:06:13.462329  461068 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.617648247s
	I1017 20:06:17.058967  461068 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 9.214704296s
	I1017 20:06:18.846221  461068 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 11.001744282s
	I1017 20:06:18.867440  461068 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 20:06:18.887604  461068 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 20:06:18.904620  461068 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 20:06:18.904836  461068 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-572724 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 20:06:18.918688  461068 kubeadm.go:318] [bootstrap-token] Using token: wsfyk4.45lxfghtlhwh7qn7
	I1017 20:06:16.778292  457767 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 20:06:16.785111  457767 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 20:06:16.785135  457767 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 20:06:16.803825  457767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 20:06:17.284932  457767 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 20:06:17.285035  457767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:06:17.285116  457767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-413711 minikube.k8s.io/updated_at=2025_10_17T20_06_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d minikube.k8s.io/name=no-preload-413711 minikube.k8s.io/primary=true
	I1017 20:06:17.549951  457767 ops.go:34] apiserver oom_adj: -16
	I1017 20:06:17.550079  457767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:06:18.051049  457767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:06:18.550126  457767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:06:19.050887  457767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:06:19.550646  457767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:06:18.921638  461068 out.go:252]   - Configuring RBAC rules ...
	I1017 20:06:18.921788  461068 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 20:06:18.929987  461068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 20:06:18.952404  461068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 20:06:18.957682  461068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 20:06:18.965793  461068 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 20:06:18.970383  461068 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 20:06:19.255521  461068 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 20:06:19.757640  461068 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 20:06:20.253791  461068 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 20:06:20.255447  461068 kubeadm.go:318] 
	I1017 20:06:20.255524  461068 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 20:06:20.255530  461068 kubeadm.go:318] 
	I1017 20:06:20.255612  461068 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 20:06:20.255616  461068 kubeadm.go:318] 
	I1017 20:06:20.255643  461068 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 20:06:20.256115  461068 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 20:06:20.256175  461068 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 20:06:20.256180  461068 kubeadm.go:318] 
	I1017 20:06:20.256237  461068 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 20:06:20.256241  461068 kubeadm.go:318] 
	I1017 20:06:20.256292  461068 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 20:06:20.256296  461068 kubeadm.go:318] 
	I1017 20:06:20.256352  461068 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 20:06:20.256430  461068 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 20:06:20.256502  461068 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 20:06:20.256507  461068 kubeadm.go:318] 
	I1017 20:06:20.256842  461068 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 20:06:20.256939  461068 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 20:06:20.256945  461068 kubeadm.go:318] 
	I1017 20:06:20.257260  461068 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token wsfyk4.45lxfghtlhwh7qn7 \
	I1017 20:06:20.257391  461068 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 \
	I1017 20:06:20.257607  461068 kubeadm.go:318] 	--control-plane 
	I1017 20:06:20.257618  461068 kubeadm.go:318] 
	I1017 20:06:20.257898  461068 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 20:06:20.257908  461068 kubeadm.go:318] 
	I1017 20:06:20.258196  461068 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token wsfyk4.45lxfghtlhwh7qn7 \
	I1017 20:06:20.258503  461068 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 
	I1017 20:06:20.269958  461068 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1017 20:06:20.270360  461068 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1017 20:06:20.270550  461068 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 20:06:20.270588  461068 cni.go:84] Creating CNI manager for ""
	I1017 20:06:20.270629  461068 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:06:20.275889  461068 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 20:06:20.050707  457767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:06:20.551144  457767 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:06:20.725693  457767 kubeadm.go:1113] duration metric: took 3.440720557s to wait for elevateKubeSystemPrivileges
	I1017 20:06:20.725719  457767 kubeadm.go:402] duration metric: took 24.588984289s to StartCluster
	I1017 20:06:20.725736  457767 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:06:20.725797  457767 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:06:20.726466  457767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:06:20.726672  457767 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:06:20.726841  457767 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 20:06:20.727093  457767 config.go:182] Loaded profile config "no-preload-413711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:06:20.727134  457767 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:06:20.727196  457767 addons.go:69] Setting storage-provisioner=true in profile "no-preload-413711"
	I1017 20:06:20.727211  457767 addons.go:238] Setting addon storage-provisioner=true in "no-preload-413711"
	I1017 20:06:20.727234  457767 host.go:66] Checking if "no-preload-413711" exists ...
	I1017 20:06:20.727724  457767 cli_runner.go:164] Run: docker container inspect no-preload-413711 --format={{.State.Status}}
	I1017 20:06:20.728547  457767 addons.go:69] Setting default-storageclass=true in profile "no-preload-413711"
	I1017 20:06:20.728572  457767 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-413711"
	I1017 20:06:20.728871  457767 cli_runner.go:164] Run: docker container inspect no-preload-413711 --format={{.State.Status}}
	I1017 20:06:20.731493  457767 out.go:179] * Verifying Kubernetes components...
	I1017 20:06:20.734390  457767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:06:20.768902  457767 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:06:20.278799  461068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 20:06:20.283186  461068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 20:06:20.283212  461068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 20:06:20.305446  461068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 20:06:21.101787  461068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 20:06:21.101925  461068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:06:21.102000  461068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-572724 minikube.k8s.io/updated_at=2025_10_17T20_06_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d minikube.k8s.io/name=embed-certs-572724 minikube.k8s.io/primary=true
	I1017 20:06:21.433865  461068 ops.go:34] apiserver oom_adj: -16
	I1017 20:06:21.433975  461068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:06:21.934990  461068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:06:22.434711  461068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:06:20.771789  457767 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:06:20.771812  457767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:06:20.771876  457767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-413711
	I1017 20:06:20.774067  457767 addons.go:238] Setting addon default-storageclass=true in "no-preload-413711"
	I1017 20:06:20.774104  457767 host.go:66] Checking if "no-preload-413711" exists ...
	I1017 20:06:20.774535  457767 cli_runner.go:164] Run: docker container inspect no-preload-413711 --format={{.State.Status}}
	I1017 20:06:20.816706  457767 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:06:20.816729  457767 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:06:20.816796  457767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-413711
	I1017 20:06:20.822460  457767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/no-preload-413711/id_rsa Username:docker}
	I1017 20:06:20.850226  457767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/no-preload-413711/id_rsa Username:docker}
	I1017 20:06:21.302243  457767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:06:21.323400  457767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:06:21.363693  457767 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 20:06:21.363901  457767 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:06:22.640980  457767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.317490827s)
	I1017 20:06:22.641166  457767 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.277218346s)
	I1017 20:06:22.641413  457767 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.27764665s)
	I1017 20:06:22.641436  457767 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
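Note: judging from the sed expression in the replace command above, the fragment injected into the coredns Corefile looks roughly like the following (illustrative reconstruction; the full ConfigMap is not dumped in this log):

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }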
	I1017 20:06:22.643281  457767 node_ready.go:35] waiting up to 6m0s for node "no-preload-413711" to be "Ready" ...
	I1017 20:06:22.644404  457767 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1017 20:06:22.934223  461068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:06:23.434976  461068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:06:23.934640  461068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:06:24.209160  461068 kubeadm.go:1113] duration metric: took 3.107278335s to wait for elevateKubeSystemPrivileges
	I1017 20:06:24.209192  461068 kubeadm.go:402] duration metric: took 23.23804388s to StartCluster
	I1017 20:06:24.209208  461068 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:06:24.209264  461068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:06:24.210662  461068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:06:24.210890  461068 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:06:24.211022  461068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 20:06:24.211287  461068 config.go:182] Loaded profile config "embed-certs-572724": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:06:24.211320  461068 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:06:24.211382  461068 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-572724"
	I1017 20:06:24.211398  461068 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-572724"
	I1017 20:06:24.211423  461068 host.go:66] Checking if "embed-certs-572724" exists ...
	I1017 20:06:24.211917  461068 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Status}}
	I1017 20:06:24.212449  461068 addons.go:69] Setting default-storageclass=true in profile "embed-certs-572724"
	I1017 20:06:24.212468  461068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-572724"
	I1017 20:06:24.212747  461068 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Status}}
	I1017 20:06:24.221516  461068 out.go:179] * Verifying Kubernetes components...
	I1017 20:06:24.223646  461068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:06:24.254489  461068 addons.go:238] Setting addon default-storageclass=true in "embed-certs-572724"
	I1017 20:06:24.254535  461068 host.go:66] Checking if "embed-certs-572724" exists ...
	I1017 20:06:24.254956  461068 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Status}}
	I1017 20:06:24.263520  461068 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:06:22.647497  457767 addons.go:514] duration metric: took 1.920353236s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1017 20:06:23.147949  457767 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-413711" context rescaled to 1 replicas
	W1017 20:06:24.646865  457767 node_ready.go:57] node "no-preload-413711" has "Ready":"False" status (will retry)
	I1017 20:06:24.267978  461068 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:06:24.268006  461068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:06:24.268078  461068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:06:24.295026  461068 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:06:24.295053  461068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:06:24.295121  461068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:06:24.310927  461068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:06:24.330749  461068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:06:24.784408  461068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:06:24.789164  461068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 20:06:24.789268  461068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:06:24.831333  461068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:06:26.160511  461068 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.371219373s)
	I1017 20:06:26.161651  461068 node_ready.go:35] waiting up to 6m0s for node "embed-certs-572724" to be "Ready" ...
	I1017 20:06:26.161890  461068 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.372703165s)
	I1017 20:06:26.161912  461068 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1017 20:06:26.429603  461068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.598168587s)
	I1017 20:06:26.434920  461068 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1017 20:06:26.437704  461068 addons.go:514] duration metric: took 2.226372394s for enable addons: enabled=[default-storageclass storage-provisioner]
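Note: with these two addons enabled, stock minikube exposes a default StorageClass (named "standard") backed by the storage-provisioner pod in kube-system. An illustrative manual check, not part of the test run, would be:

    kubectl --context embed-certs-572724 get storageclass
    kubectl --context embed-certs-572724 -n kube-system get pod storage-provisioner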
	I1017 20:06:26.666262  461068 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-572724" context rescaled to 1 replicas
	W1017 20:06:27.146264  457767 node_ready.go:57] node "no-preload-413711" has "Ready":"False" status (will retry)
	W1017 20:06:29.646030  457767 node_ready.go:57] node "no-preload-413711" has "Ready":"False" status (will retry)
	W1017 20:06:28.165439  461068 node_ready.go:57] node "embed-certs-572724" has "Ready":"False" status (will retry)
	W1017 20:06:30.665184  461068 node_ready.go:57] node "embed-certs-572724" has "Ready":"False" status (will retry)
	W1017 20:06:31.646742  457767 node_ready.go:57] node "no-preload-413711" has "Ready":"False" status (will retry)
	W1017 20:06:34.147543  457767 node_ready.go:57] node "no-preload-413711" has "Ready":"False" status (will retry)
	W1017 20:06:32.665404  461068 node_ready.go:57] node "embed-certs-572724" has "Ready":"False" status (will retry)
	W1017 20:06:35.164482  461068 node_ready.go:57] node "embed-certs-572724" has "Ready":"False" status (will retry)
	W1017 20:06:36.646181  457767 node_ready.go:57] node "no-preload-413711" has "Ready":"False" status (will retry)
	I1017 20:06:37.146336  457767 node_ready.go:49] node "no-preload-413711" is "Ready"
	I1017 20:06:37.146367  457767 node_ready.go:38] duration metric: took 14.503068143s for node "no-preload-413711" to be "Ready" ...
	I1017 20:06:37.146381  457767 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:06:37.146443  457767 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:06:37.172761  457767 api_server.go:72] duration metric: took 16.446060163s to wait for apiserver process to appear ...
	I1017 20:06:37.172790  457767 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:06:37.172812  457767 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:06:37.181237  457767 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1017 20:06:37.182331  457767 api_server.go:141] control plane version: v1.34.1
	I1017 20:06:37.182362  457767 api_server.go:131] duration metric: took 9.559597ms to wait for apiserver health ...
	I1017 20:06:37.182372  457767 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:06:37.185637  457767 system_pods.go:59] 8 kube-system pods found
	I1017 20:06:37.185672  457767 system_pods.go:61] "coredns-66bc5c9577-4bslb" [be4a3950-c683-4860-a96b-c48c9db546ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:06:37.185678  457767 system_pods.go:61] "etcd-no-preload-413711" [68a6798f-bea7-4d3f-b842-c2bbcc9fd338] Running
	I1017 20:06:37.185685  457767 system_pods.go:61] "kindnet-7jkvq" [a848c0df-632d-4733-9f76-1ed315cae3be] Running
	I1017 20:06:37.185690  457767 system_pods.go:61] "kube-apiserver-no-preload-413711" [2e789da4-e54f-4641-9fe0-0c9b84c006ac] Running
	I1017 20:06:37.185695  457767 system_pods.go:61] "kube-controller-manager-no-preload-413711" [1edb18bd-3e00-4c28-be3c-1e15ec28992a] Running
	I1017 20:06:37.185699  457767 system_pods.go:61] "kube-proxy-kl48k" [30ab540f-a82e-479b-956b-1b7596cf1561] Running
	I1017 20:06:37.185703  457767 system_pods.go:61] "kube-scheduler-no-preload-413711" [61f47adf-393d-4916-a1e0-326db562bb59] Running
	I1017 20:06:37.185710  457767 system_pods.go:61] "storage-provisioner" [9b892a85-762b-434f-a48c-1ff1266c2b06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:06:37.185716  457767 system_pods.go:74] duration metric: took 3.338574ms to wait for pod list to return data ...
	I1017 20:06:37.185727  457767 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:06:37.188267  457767 default_sa.go:45] found service account: "default"
	I1017 20:06:37.188292  457767 default_sa.go:55] duration metric: took 2.559585ms for default service account to be created ...
	I1017 20:06:37.188302  457767 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:06:37.190928  457767 system_pods.go:86] 8 kube-system pods found
	I1017 20:06:37.190959  457767 system_pods.go:89] "coredns-66bc5c9577-4bslb" [be4a3950-c683-4860-a96b-c48c9db546ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:06:37.190992  457767 system_pods.go:89] "etcd-no-preload-413711" [68a6798f-bea7-4d3f-b842-c2bbcc9fd338] Running
	I1017 20:06:37.191010  457767 system_pods.go:89] "kindnet-7jkvq" [a848c0df-632d-4733-9f76-1ed315cae3be] Running
	I1017 20:06:37.191015  457767 system_pods.go:89] "kube-apiserver-no-preload-413711" [2e789da4-e54f-4641-9fe0-0c9b84c006ac] Running
	I1017 20:06:37.191020  457767 system_pods.go:89] "kube-controller-manager-no-preload-413711" [1edb18bd-3e00-4c28-be3c-1e15ec28992a] Running
	I1017 20:06:37.191024  457767 system_pods.go:89] "kube-proxy-kl48k" [30ab540f-a82e-479b-956b-1b7596cf1561] Running
	I1017 20:06:37.191029  457767 system_pods.go:89] "kube-scheduler-no-preload-413711" [61f47adf-393d-4916-a1e0-326db562bb59] Running
	I1017 20:06:37.191038  457767 system_pods.go:89] "storage-provisioner" [9b892a85-762b-434f-a48c-1ff1266c2b06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:06:37.191063  457767 retry.go:31] will retry after 189.191649ms: missing components: kube-dns
	I1017 20:06:37.387946  457767 system_pods.go:86] 8 kube-system pods found
	I1017 20:06:37.387983  457767 system_pods.go:89] "coredns-66bc5c9577-4bslb" [be4a3950-c683-4860-a96b-c48c9db546ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:06:37.387992  457767 system_pods.go:89] "etcd-no-preload-413711" [68a6798f-bea7-4d3f-b842-c2bbcc9fd338] Running
	I1017 20:06:37.387999  457767 system_pods.go:89] "kindnet-7jkvq" [a848c0df-632d-4733-9f76-1ed315cae3be] Running
	I1017 20:06:37.388034  457767 system_pods.go:89] "kube-apiserver-no-preload-413711" [2e789da4-e54f-4641-9fe0-0c9b84c006ac] Running
	I1017 20:06:37.388047  457767 system_pods.go:89] "kube-controller-manager-no-preload-413711" [1edb18bd-3e00-4c28-be3c-1e15ec28992a] Running
	I1017 20:06:37.388053  457767 system_pods.go:89] "kube-proxy-kl48k" [30ab540f-a82e-479b-956b-1b7596cf1561] Running
	I1017 20:06:37.388057  457767 system_pods.go:89] "kube-scheduler-no-preload-413711" [61f47adf-393d-4916-a1e0-326db562bb59] Running
	I1017 20:06:37.388063  457767 system_pods.go:89] "storage-provisioner" [9b892a85-762b-434f-a48c-1ff1266c2b06] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:06:37.388084  457767 retry.go:31] will retry after 373.75005ms: missing components: kube-dns
	I1017 20:06:37.767144  457767 system_pods.go:86] 8 kube-system pods found
	I1017 20:06:37.767183  457767 system_pods.go:89] "coredns-66bc5c9577-4bslb" [be4a3950-c683-4860-a96b-c48c9db546ea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:06:37.767194  457767 system_pods.go:89] "etcd-no-preload-413711" [68a6798f-bea7-4d3f-b842-c2bbcc9fd338] Running
	I1017 20:06:37.767200  457767 system_pods.go:89] "kindnet-7jkvq" [a848c0df-632d-4733-9f76-1ed315cae3be] Running
	I1017 20:06:37.767205  457767 system_pods.go:89] "kube-apiserver-no-preload-413711" [2e789da4-e54f-4641-9fe0-0c9b84c006ac] Running
	I1017 20:06:37.767210  457767 system_pods.go:89] "kube-controller-manager-no-preload-413711" [1edb18bd-3e00-4c28-be3c-1e15ec28992a] Running
	I1017 20:06:37.767214  457767 system_pods.go:89] "kube-proxy-kl48k" [30ab540f-a82e-479b-956b-1b7596cf1561] Running
	I1017 20:06:37.767219  457767 system_pods.go:89] "kube-scheduler-no-preload-413711" [61f47adf-393d-4916-a1e0-326db562bb59] Running
	I1017 20:06:37.767223  457767 system_pods.go:89] "storage-provisioner" [9b892a85-762b-434f-a48c-1ff1266c2b06] Running
	I1017 20:06:37.767236  457767 system_pods.go:126] duration metric: took 578.928902ms to wait for k8s-apps to be running ...
	I1017 20:06:37.767247  457767 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:06:37.767313  457767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:06:37.784632  457767 system_svc.go:56] duration metric: took 17.37475ms WaitForService to wait for kubelet
	I1017 20:06:37.784658  457767 kubeadm.go:586] duration metric: took 17.057962736s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:06:37.784676  457767 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:06:37.787395  457767 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:06:37.787425  457767 node_conditions.go:123] node cpu capacity is 2
	I1017 20:06:37.787438  457767 node_conditions.go:105] duration metric: took 2.757118ms to run NodePressure ...
	I1017 20:06:37.787449  457767 start.go:241] waiting for startup goroutines ...
	I1017 20:06:37.787458  457767 start.go:246] waiting for cluster config update ...
	I1017 20:06:37.787470  457767 start.go:255] writing updated cluster config ...
	I1017 20:06:37.787757  457767 ssh_runner.go:195] Run: rm -f paused
	I1017 20:06:37.792132  457767 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:06:37.795832  457767 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4bslb" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:06:38.801739  457767 pod_ready.go:94] pod "coredns-66bc5c9577-4bslb" is "Ready"
	I1017 20:06:38.801776  457767 pod_ready.go:86] duration metric: took 1.005919592s for pod "coredns-66bc5c9577-4bslb" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:06:38.804566  457767 pod_ready.go:83] waiting for pod "etcd-no-preload-413711" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:06:38.810528  457767 pod_ready.go:94] pod "etcd-no-preload-413711" is "Ready"
	I1017 20:06:38.810554  457767 pod_ready.go:86] duration metric: took 5.951106ms for pod "etcd-no-preload-413711" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:06:38.813112  457767 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-413711" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:06:38.817767  457767 pod_ready.go:94] pod "kube-apiserver-no-preload-413711" is "Ready"
	I1017 20:06:38.817796  457767 pod_ready.go:86] duration metric: took 4.656635ms for pod "kube-apiserver-no-preload-413711" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:06:38.820096  457767 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-413711" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:06:38.999675  457767 pod_ready.go:94] pod "kube-controller-manager-no-preload-413711" is "Ready"
	I1017 20:06:38.999705  457767 pod_ready.go:86] duration metric: took 179.525743ms for pod "kube-controller-manager-no-preload-413711" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:06:39.199890  457767 pod_ready.go:83] waiting for pod "kube-proxy-kl48k" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:06:39.599414  457767 pod_ready.go:94] pod "kube-proxy-kl48k" is "Ready"
	I1017 20:06:39.599480  457767 pod_ready.go:86] duration metric: took 399.5645ms for pod "kube-proxy-kl48k" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:06:39.799895  457767 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-413711" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:06:40.200343  457767 pod_ready.go:94] pod "kube-scheduler-no-preload-413711" is "Ready"
	I1017 20:06:40.200373  457767 pod_ready.go:86] duration metric: took 400.448563ms for pod "kube-scheduler-no-preload-413711" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:06:40.200387  457767 pod_ready.go:40] duration metric: took 2.408225525s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:06:40.262460  457767 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 20:06:40.265360  457767 out.go:179] * Done! kubectl is now configured to use "no-preload-413711" cluster and "default" namespace by default
	W1017 20:06:37.664684  461068 node_ready.go:57] node "embed-certs-572724" has "Ready":"False" status (will retry)
	W1017 20:06:39.664746  461068 node_ready.go:57] node "embed-certs-572724" has "Ready":"False" status (will retry)
	W1017 20:06:42.165782  461068 node_ready.go:57] node "embed-certs-572724" has "Ready":"False" status (will retry)
	W1017 20:06:44.665648  461068 node_ready.go:57] node "embed-certs-572724" has "Ready":"False" status (will retry)
	W1017 20:06:47.164471  461068 node_ready.go:57] node "embed-certs-572724" has "Ready":"False" status (will retry)
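Note: the node_ready.go retry loop above simply polls the node object until its Ready condition becomes true. Expressed with plain kubectl, shown only for comparison and not what minikube actually runs, the equivalent wait would be:

    kubectl --context embed-certs-572724 wait node/embed-certs-572724 \
      --for=condition=Ready --timeout=6m0s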
	
	
	==> CRI-O <==
	Oct 17 20:06:37 no-preload-413711 crio[838]: time="2025-10-17T20:06:37.501100995Z" level=info msg="Created container ec78332e3db2f710d06094185b8d3a9c2f02c8b0e7726e053e9abacb16de9a59: kube-system/coredns-66bc5c9577-4bslb/coredns" id=f41b94a6-859e-4b13-a04f-0c71dc442d30 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:06:37 no-preload-413711 crio[838]: time="2025-10-17T20:06:37.506679926Z" level=info msg="Starting container: ec78332e3db2f710d06094185b8d3a9c2f02c8b0e7726e053e9abacb16de9a59" id=aa542aa0-60e5-4508-ab48-a7e75bd964a6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:06:37 no-preload-413711 crio[838]: time="2025-10-17T20:06:37.509455628Z" level=info msg="Started container" PID=2498 containerID=ec78332e3db2f710d06094185b8d3a9c2f02c8b0e7726e053e9abacb16de9a59 description=kube-system/coredns-66bc5c9577-4bslb/coredns id=aa542aa0-60e5-4508-ab48-a7e75bd964a6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e027ca1c39cec136d8c6ec1f4f5a43536b5e12b36068df4b608f79b8f09e65c1
	Oct 17 20:06:40 no-preload-413711 crio[838]: time="2025-10-17T20:06:40.811592485Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a74d56bf-2b31-4930-a296-a9ca219a02cd name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:06:40 no-preload-413711 crio[838]: time="2025-10-17T20:06:40.811656836Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:06:40 no-preload-413711 crio[838]: time="2025-10-17T20:06:40.822713294Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a13dd7be6cdbf3f924370b62761040b55b548924ce576a24bf8924ee58278aba UID:e8776954-7870-4b04-a178-bc73c09ccec1 NetNS:/var/run/netns/7ad6db96-f7d3-4e7b-9d28-5d2287cfe864 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001474df0}] Aliases:map[]}"
	Oct 17 20:06:40 no-preload-413711 crio[838]: time="2025-10-17T20:06:40.826247785Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 17 20:06:40 no-preload-413711 crio[838]: time="2025-10-17T20:06:40.834708254Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a13dd7be6cdbf3f924370b62761040b55b548924ce576a24bf8924ee58278aba UID:e8776954-7870-4b04-a178-bc73c09ccec1 NetNS:/var/run/netns/7ad6db96-f7d3-4e7b-9d28-5d2287cfe864 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001474df0}] Aliases:map[]}"
	Oct 17 20:06:40 no-preload-413711 crio[838]: time="2025-10-17T20:06:40.834851995Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 17 20:06:40 no-preload-413711 crio[838]: time="2025-10-17T20:06:40.841345289Z" level=info msg="Ran pod sandbox a13dd7be6cdbf3f924370b62761040b55b548924ce576a24bf8924ee58278aba with infra container: default/busybox/POD" id=a74d56bf-2b31-4930-a296-a9ca219a02cd name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:06:40 no-preload-413711 crio[838]: time="2025-10-17T20:06:40.842694003Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1a87b53d-9406-4ff3-a416-bba33cee5aac name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:06:40 no-preload-413711 crio[838]: time="2025-10-17T20:06:40.842837129Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1a87b53d-9406-4ff3-a416-bba33cee5aac name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:06:40 no-preload-413711 crio[838]: time="2025-10-17T20:06:40.842886244Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=1a87b53d-9406-4ff3-a416-bba33cee5aac name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:06:40 no-preload-413711 crio[838]: time="2025-10-17T20:06:40.843617817Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a0e21f6d-7f7b-43a6-981a-823c7450a17f name=/runtime.v1.ImageService/PullImage
	Oct 17 20:06:40 no-preload-413711 crio[838]: time="2025-10-17T20:06:40.845480839Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 20:06:42 no-preload-413711 crio[838]: time="2025-10-17T20:06:42.821270401Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=a0e21f6d-7f7b-43a6-981a-823c7450a17f name=/runtime.v1.ImageService/PullImage
	Oct 17 20:06:42 no-preload-413711 crio[838]: time="2025-10-17T20:06:42.821829376Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eabbf85d-3bb9-4954-a4ca-88c89ac7e086 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:06:42 no-preload-413711 crio[838]: time="2025-10-17T20:06:42.825115349Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=05d211b9-906a-4636-b17e-f5a16f8d7706 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:06:42 no-preload-413711 crio[838]: time="2025-10-17T20:06:42.833052115Z" level=info msg="Creating container: default/busybox/busybox" id=66afe104-f3d5-467f-9eee-bc69182628ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:06:42 no-preload-413711 crio[838]: time="2025-10-17T20:06:42.833835035Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:06:42 no-preload-413711 crio[838]: time="2025-10-17T20:06:42.83846708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:06:42 no-preload-413711 crio[838]: time="2025-10-17T20:06:42.838945737Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:06:42 no-preload-413711 crio[838]: time="2025-10-17T20:06:42.853698291Z" level=info msg="Created container 042ffb3dfa013c8449f2410734fa73594bc0124ec7266b8e25f569aba48bf6c8: default/busybox/busybox" id=66afe104-f3d5-467f-9eee-bc69182628ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:06:42 no-preload-413711 crio[838]: time="2025-10-17T20:06:42.854678596Z" level=info msg="Starting container: 042ffb3dfa013c8449f2410734fa73594bc0124ec7266b8e25f569aba48bf6c8" id=483dbaf3-b5b5-4053-8653-9e8e08576102 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:06:42 no-preload-413711 crio[838]: time="2025-10-17T20:06:42.857816101Z" level=info msg="Started container" PID=2549 containerID=042ffb3dfa013c8449f2410734fa73594bc0124ec7266b8e25f569aba48bf6c8 description=default/busybox/busybox id=483dbaf3-b5b5-4053-8653-9e8e08576102 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a13dd7be6cdbf3f924370b62761040b55b548924ce576a24bf8924ee58278aba
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	042ffb3dfa013       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   a13dd7be6cdbf       busybox                                     default
	ec78332e3db2f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago      Running             coredns                   0                   e027ca1c39cec       coredns-66bc5c9577-4bslb                    kube-system
	2dac7585e858e       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      13 seconds ago      Running             storage-provisioner       0                   adc53d298dfb4       storage-provisioner                         kube-system
	5b3a67ac28353       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   f260f24d1df00       kindnet-7jkvq                               kube-system
	2b59adcdce025       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      29 seconds ago      Running             kube-proxy                0                   0039838c1c81f       kube-proxy-kl48k                            kube-system
	1f6b7830106d7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      44 seconds ago      Running             kube-controller-manager   0                   2087af971044e       kube-controller-manager-no-preload-413711   kube-system
	3a7a32a268e76       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      44 seconds ago      Running             kube-apiserver            0                   e453729116013       kube-apiserver-no-preload-413711            kube-system
	d65de72f62118       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      44 seconds ago      Running             kube-scheduler            0                   8e6a30b3dbc7e       kube-scheduler-no-preload-413711            kube-system
	756c8a5a09475       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      44 seconds ago      Running             etcd                      0                   1c56a10cad762       etcd-no-preload-413711                      kube-system
	
	
	==> coredns [ec78332e3db2f710d06094185b8d3a9c2f02c8b0e7726e053e9abacb16de9a59] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60458 - 53090 "HINFO IN 1377034106071904930.2738848755075568578. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021780938s
	
	
	==> describe nodes <==
	Name:               no-preload-413711
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-413711
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=no-preload-413711
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_06_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:06:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-413711
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:06:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:06:47 +0000   Fri, 17 Oct 2025 20:06:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:06:47 +0000   Fri, 17 Oct 2025 20:06:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:06:47 +0000   Fri, 17 Oct 2025 20:06:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:06:47 +0000   Fri, 17 Oct 2025 20:06:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-413711
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                b8affef4-ca65-41f6-ac3b-b82ba141b1e4
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-4bslb                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-413711                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         38s
	  kube-system                 kindnet-7jkvq                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-413711             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-413711    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-kl48k                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-413711             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node no-preload-413711 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node no-preload-413711 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node no-preload-413711 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-413711 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-413711 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-413711 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node no-preload-413711 event: Registered Node no-preload-413711 in Controller
	  Normal   NodeReady                14s                kubelet          Node no-preload-413711 status is now: NodeReady
	
	
	==> dmesg <==
	[ +34.896999] overlayfs: idmapped layers are currently not supported
	[Oct17 19:42] overlayfs: idmapped layers are currently not supported
	[Oct17 19:43] overlayfs: idmapped layers are currently not supported
	[Oct17 19:45] overlayfs: idmapped layers are currently not supported
	[Oct17 19:46] overlayfs: idmapped layers are currently not supported
	[ +18.070710] overlayfs: idmapped layers are currently not supported
	[Oct17 19:47] overlayfs: idmapped layers are currently not supported
	[ +43.697346] overlayfs: idmapped layers are currently not supported
	[Oct17 19:48] overlayfs: idmapped layers are currently not supported
	[Oct17 19:49] overlayfs: idmapped layers are currently not supported
	[ +26.194162] overlayfs: idmapped layers are currently not supported
	[Oct17 19:50] overlayfs: idmapped layers are currently not supported
	[Oct17 19:52] overlayfs: idmapped layers are currently not supported
	[Oct17 19:54] overlayfs: idmapped layers are currently not supported
	[Oct17 19:55] overlayfs: idmapped layers are currently not supported
	[Oct17 19:56] overlayfs: idmapped layers are currently not supported
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:01] overlayfs: idmapped layers are currently not supported
	[ +29.873287] overlayfs: idmapped layers are currently not supported
	[Oct17 20:02] overlayfs: idmapped layers are currently not supported
	[ +29.827785] overlayfs: idmapped layers are currently not supported
	[Oct17 20:03] overlayfs: idmapped layers are currently not supported
	[Oct17 20:04] overlayfs: idmapped layers are currently not supported
	[Oct17 20:05] overlayfs: idmapped layers are currently not supported
	[Oct17 20:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [756c8a5a09475fcfb871240f4e305fc80513aa16c675b80310bd1353c86262df] <==
	{"level":"warn","ts":"2025-10-17T20:06:09.544067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:09.600660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:09.645792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:09.688595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:09.745358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:09.792602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:09.891027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:09.951341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:09.985682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:10.035938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:10.049273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:10.094018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:10.121762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:10.159381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:10.199084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:10.229724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:10.259365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:10.289202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:10.323180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:10.358109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:10.413093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:10.440267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:10.473090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:10.505867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:10.727493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51774","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:06:51 up  2:49,  0 user,  load average: 5.08, 3.91, 2.97
	Linux no-preload-413711 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5b3a67ac28353f83494623a58ee1b335fc39ca950c250ab9621366a0093a9268] <==
	I1017 20:06:26.415248       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:06:26.415495       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 20:06:26.415616       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:06:26.415635       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:06:26.415650       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:06:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:06:26.614588       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:06:26.614657       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:06:26.614668       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:06:26.615576       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:06:26.814790       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:06:26.814820       1 metrics.go:72] Registering metrics
	I1017 20:06:26.814892       1 controller.go:711] "Syncing nftables rules"
	I1017 20:06:36.620599       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:06:36.620649       1 main.go:301] handling current node
	I1017 20:06:46.615682       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:06:46.615718       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3a7a32a268e76f15c1193587b29f53c7d3d66d2222b590da4ffd804b86b0232f] <==
	I1017 20:06:12.832817       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:06:12.838996       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:06:12.842872       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1017 20:06:12.961345       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:06:12.962152       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:06:12.973274       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:06:12.980719       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1017 20:06:13.141343       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 20:06:13.163691       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 20:06:13.172544       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:06:14.868138       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:06:14.970525       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:06:15.133109       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 20:06:15.154366       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1017 20:06:15.161692       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:06:15.186269       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:06:15.318944       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:06:16.172910       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:06:16.196572       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 20:06:16.206828       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 20:06:20.583055       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:06:21.097962       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:06:21.122654       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:06:21.241449       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1017 20:06:49.621994       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:56024: use of closed network connection
	
	
	==> kube-controller-manager [1f6b7830106d75b6e18dd189409987818f781d92abe6ca9cd459e8863972992b] <==
	I1017 20:06:20.319943       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:06:20.321039       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:06:20.323527       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1017 20:06:20.323669       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 20:06:20.325375       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 20:06:20.337338       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 20:06:20.348632       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1017 20:06:20.357052       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 20:06:20.364648       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:06:20.364844       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 20:06:20.364870       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1017 20:06:20.364903       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 20:06:20.364919       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 20:06:20.367396       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 20:06:20.368026       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:06:20.368094       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:06:20.368153       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 20:06:20.370864       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 20:06:20.376635       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:06:20.389340       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 20:06:20.416039       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:06:20.416114       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:06:20.416121       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:06:20.416128       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:06:40.318071       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2b59adcdce02554096d5e16e29befd6fc9f72557aaaa80c29106bc7eea4b3504] <==
	I1017 20:06:22.270704       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:06:22.366293       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:06:22.468626       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:06:22.468675       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1017 20:06:22.468834       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:06:22.656506       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:06:22.656575       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:06:22.687304       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:06:22.687590       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:06:22.687604       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:06:22.689497       1 config.go:200] "Starting service config controller"
	I1017 20:06:22.689508       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:06:22.689531       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:06:22.689535       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:06:22.689547       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:06:22.689551       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:06:22.694150       1 config.go:309] "Starting node config controller"
	I1017 20:06:22.694175       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:06:22.790119       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:06:22.790157       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:06:22.790213       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 20:06:22.794765       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [d65de72f6211824eadef7fab81c2f6c53b608c74b09f6ff4342c590eb227d6c3] <==
	I1017 20:06:10.128681       1 serving.go:386] Generated self-signed cert in-memory
	I1017 20:06:15.196451       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:06:15.208957       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:06:15.218529       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:06:15.218715       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:06:15.220578       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:06:15.218729       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:06:15.224039       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:06:15.218742       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 20:06:15.218678       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 20:06:15.236835       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 20:06:15.321004       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:06:15.324939       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:06:15.338011       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 17 20:06:20 no-preload-413711 kubelet[2018]: I1017 20:06:20.382286    2018 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 17 20:06:21 no-preload-413711 kubelet[2018]: I1017 20:06:21.489872    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/30ab540f-a82e-479b-956b-1b7596cf1561-kube-proxy\") pod \"kube-proxy-kl48k\" (UID: \"30ab540f-a82e-479b-956b-1b7596cf1561\") " pod="kube-system/kube-proxy-kl48k"
	Oct 17 20:06:21 no-preload-413711 kubelet[2018]: I1017 20:06:21.489984    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a848c0df-632d-4733-9f76-1ed315cae3be-lib-modules\") pod \"kindnet-7jkvq\" (UID: \"a848c0df-632d-4733-9f76-1ed315cae3be\") " pod="kube-system/kindnet-7jkvq"
	Oct 17 20:06:21 no-preload-413711 kubelet[2018]: I1017 20:06:21.490003    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30ab540f-a82e-479b-956b-1b7596cf1561-xtables-lock\") pod \"kube-proxy-kl48k\" (UID: \"30ab540f-a82e-479b-956b-1b7596cf1561\") " pod="kube-system/kube-proxy-kl48k"
	Oct 17 20:06:21 no-preload-413711 kubelet[2018]: I1017 20:06:21.490020    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a848c0df-632d-4733-9f76-1ed315cae3be-cni-cfg\") pod \"kindnet-7jkvq\" (UID: \"a848c0df-632d-4733-9f76-1ed315cae3be\") " pod="kube-system/kindnet-7jkvq"
	Oct 17 20:06:21 no-preload-413711 kubelet[2018]: I1017 20:06:21.490074    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a848c0df-632d-4733-9f76-1ed315cae3be-xtables-lock\") pod \"kindnet-7jkvq\" (UID: \"a848c0df-632d-4733-9f76-1ed315cae3be\") " pod="kube-system/kindnet-7jkvq"
	Oct 17 20:06:21 no-preload-413711 kubelet[2018]: I1017 20:06:21.490096    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30ab540f-a82e-479b-956b-1b7596cf1561-lib-modules\") pod \"kube-proxy-kl48k\" (UID: \"30ab540f-a82e-479b-956b-1b7596cf1561\") " pod="kube-system/kube-proxy-kl48k"
	Oct 17 20:06:21 no-preload-413711 kubelet[2018]: I1017 20:06:21.490147    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w25rh\" (UniqueName: \"kubernetes.io/projected/30ab540f-a82e-479b-956b-1b7596cf1561-kube-api-access-w25rh\") pod \"kube-proxy-kl48k\" (UID: \"30ab540f-a82e-479b-956b-1b7596cf1561\") " pod="kube-system/kube-proxy-kl48k"
	Oct 17 20:06:21 no-preload-413711 kubelet[2018]: I1017 20:06:21.490178    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl96m\" (UniqueName: \"kubernetes.io/projected/a848c0df-632d-4733-9f76-1ed315cae3be-kube-api-access-cl96m\") pod \"kindnet-7jkvq\" (UID: \"a848c0df-632d-4733-9f76-1ed315cae3be\") " pod="kube-system/kindnet-7jkvq"
	Oct 17 20:06:21 no-preload-413711 kubelet[2018]: I1017 20:06:21.686308    2018 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 17 20:06:22 no-preload-413711 kubelet[2018]: W1017 20:06:22.005320    2018 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892/crio-f260f24d1df001b07813a816d75a67d3b9d4ce07c99f57ae190db05eaee5081d WatchSource:0}: Error finding container f260f24d1df001b07813a816d75a67d3b9d4ce07c99f57ae190db05eaee5081d: Status 404 returned error can't find the container with id f260f24d1df001b07813a816d75a67d3b9d4ce07c99f57ae190db05eaee5081d
	Oct 17 20:06:22 no-preload-413711 kubelet[2018]: W1017 20:06:22.016262    2018 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892/crio-0039838c1c81fdc218dcff6e3f5ae04acc48a5649e531a1e33e874cd72da5d10 WatchSource:0}: Error finding container 0039838c1c81fdc218dcff6e3f5ae04acc48a5649e531a1e33e874cd72da5d10: Status 404 returned error can't find the container with id 0039838c1c81fdc218dcff6e3f5ae04acc48a5649e531a1e33e874cd72da5d10
	Oct 17 20:06:22 no-preload-413711 kubelet[2018]: I1017 20:06:22.659658    2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kl48k" podStartSLOduration=1.659641894 podStartE2EDuration="1.659641894s" podCreationTimestamp="2025-10-17 20:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:06:22.60745089 +0000 UTC m=+6.520467981" watchObservedRunningTime="2025-10-17 20:06:22.659641894 +0000 UTC m=+6.572658985"
	Oct 17 20:06:37 no-preload-413711 kubelet[2018]: I1017 20:06:37.027193    2018 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 17 20:06:37 no-preload-413711 kubelet[2018]: I1017 20:06:37.067302    2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7jkvq" podStartSLOduration=11.758345224 podStartE2EDuration="16.067284696s" podCreationTimestamp="2025-10-17 20:06:21 +0000 UTC" firstStartedPulling="2025-10-17 20:06:22.019943093 +0000 UTC m=+5.932960185" lastFinishedPulling="2025-10-17 20:06:26.328882565 +0000 UTC m=+10.241899657" observedRunningTime="2025-10-17 20:06:26.577432085 +0000 UTC m=+10.490449194" watchObservedRunningTime="2025-10-17 20:06:37.067284696 +0000 UTC m=+20.980301804"
	Oct 17 20:06:37 no-preload-413711 kubelet[2018]: I1017 20:06:37.130881    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxd6d\" (UniqueName: \"kubernetes.io/projected/be4a3950-c683-4860-a96b-c48c9db546ea-kube-api-access-gxd6d\") pod \"coredns-66bc5c9577-4bslb\" (UID: \"be4a3950-c683-4860-a96b-c48c9db546ea\") " pod="kube-system/coredns-66bc5c9577-4bslb"
	Oct 17 20:06:37 no-preload-413711 kubelet[2018]: I1017 20:06:37.130938    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9b892a85-762b-434f-a48c-1ff1266c2b06-tmp\") pod \"storage-provisioner\" (UID: \"9b892a85-762b-434f-a48c-1ff1266c2b06\") " pod="kube-system/storage-provisioner"
	Oct 17 20:06:37 no-preload-413711 kubelet[2018]: I1017 20:06:37.130959    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zfhf\" (UniqueName: \"kubernetes.io/projected/9b892a85-762b-434f-a48c-1ff1266c2b06-kube-api-access-5zfhf\") pod \"storage-provisioner\" (UID: \"9b892a85-762b-434f-a48c-1ff1266c2b06\") " pod="kube-system/storage-provisioner"
	Oct 17 20:06:37 no-preload-413711 kubelet[2018]: I1017 20:06:37.130976    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be4a3950-c683-4860-a96b-c48c9db546ea-config-volume\") pod \"coredns-66bc5c9577-4bslb\" (UID: \"be4a3950-c683-4860-a96b-c48c9db546ea\") " pod="kube-system/coredns-66bc5c9577-4bslb"
	Oct 17 20:06:37 no-preload-413711 kubelet[2018]: W1017 20:06:37.391496    2018 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892/crio-adc53d298dfb4e718d06f347703d641d209cbee2f8e88e072a935272a1667558 WatchSource:0}: Error finding container adc53d298dfb4e718d06f347703d641d209cbee2f8e88e072a935272a1667558: Status 404 returned error can't find the container with id adc53d298dfb4e718d06f347703d641d209cbee2f8e88e072a935272a1667558
	Oct 17 20:06:37 no-preload-413711 kubelet[2018]: W1017 20:06:37.437336    2018 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892/crio-e027ca1c39cec136d8c6ec1f4f5a43536b5e12b36068df4b608f79b8f09e65c1 WatchSource:0}: Error finding container e027ca1c39cec136d8c6ec1f4f5a43536b5e12b36068df4b608f79b8f09e65c1: Status 404 returned error can't find the container with id e027ca1c39cec136d8c6ec1f4f5a43536b5e12b36068df4b608f79b8f09e65c1
	Oct 17 20:06:37 no-preload-413711 kubelet[2018]: I1017 20:06:37.626721    2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4bslb" podStartSLOduration=16.62670184 podStartE2EDuration="16.62670184s" podCreationTimestamp="2025-10-17 20:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:06:37.606581664 +0000 UTC m=+21.519598773" watchObservedRunningTime="2025-10-17 20:06:37.62670184 +0000 UTC m=+21.539718932"
	Oct 17 20:06:38 no-preload-413711 kubelet[2018]: I1017 20:06:38.602715    2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.60269566 podStartE2EDuration="16.60269566s" podCreationTimestamp="2025-10-17 20:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:06:37.628028277 +0000 UTC m=+21.541045385" watchObservedRunningTime="2025-10-17 20:06:38.60269566 +0000 UTC m=+22.515712760"
	Oct 17 20:06:40 no-preload-413711 kubelet[2018]: I1017 20:06:40.556084    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4vpl\" (UniqueName: \"kubernetes.io/projected/e8776954-7870-4b04-a178-bc73c09ccec1-kube-api-access-p4vpl\") pod \"busybox\" (UID: \"e8776954-7870-4b04-a178-bc73c09ccec1\") " pod="default/busybox"
	Oct 17 20:06:40 no-preload-413711 kubelet[2018]: W1017 20:06:40.838791    2018 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892/crio-a13dd7be6cdbf3f924370b62761040b55b548924ce576a24bf8924ee58278aba WatchSource:0}: Error finding container a13dd7be6cdbf3f924370b62761040b55b548924ce576a24bf8924ee58278aba: Status 404 returned error can't find the container with id a13dd7be6cdbf3f924370b62761040b55b548924ce576a24bf8924ee58278aba
	
	
	==> storage-provisioner [2dac7585e858ea6224c2e0bf04069add83549080b20030db6defafeec9f672ed] <==
	I1017 20:06:37.463387       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:06:37.477428       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:06:37.477496       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 20:06:37.485555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:37.500870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:06:37.501111       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:06:37.501484       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-413711_69b53426-7708-40ac-b2cb-392ef68d3e9e!
	I1017 20:06:37.501620       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7364bcd8-86db-45e0-9833-9e2841aa3bab", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-413711_69b53426-7708-40ac-b2cb-392ef68d3e9e became leader
	W1017 20:06:37.511615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:37.532700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:06:37.602248       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-413711_69b53426-7708-40ac-b2cb-392ef68d3e9e!
	W1017 20:06:39.537166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:39.541422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:41.544374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:41.548747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:43.551808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:43.558206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:45.561778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:45.569916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:47.573562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:47.580207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:49.583167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:49.588228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:51.592271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:51.599138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-413711 -n no-preload-413711
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-413711 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.57s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-572724 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-572724 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (262.777481ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:07:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-572724 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-572724 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-572724 describe deploy/metrics-server -n kube-system: exit status 1 (133.975877ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-572724 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-572724
helpers_test.go:243: (dbg) docker inspect embed-certs-572724:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e",
	        "Created": "2025-10-17T20:05:49.604188435Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 461562,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:05:50.55964541Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e/hosts",
	        "LogPath": "/var/lib/docker/containers/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e-json.log",
	        "Name": "/embed-certs-572724",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-572724:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-572724",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e",
	                "LowerDir": "/var/lib/docker/overlay2/c267fed6d4387f13797f2bc94da46399358babf00e15121ce773a82fcdf04251-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c267fed6d4387f13797f2bc94da46399358babf00e15121ce773a82fcdf04251/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c267fed6d4387f13797f2bc94da46399358babf00e15121ce773a82fcdf04251/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c267fed6d4387f13797f2bc94da46399358babf00e15121ce773a82fcdf04251/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-572724",
	                "Source": "/var/lib/docker/volumes/embed-certs-572724/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-572724",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-572724",
	                "name.minikube.sigs.k8s.io": "embed-certs-572724",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "951963f7ef2d0c28e43d71fee75676154bf686728e25ae0db5a619ff074ea707",
	            "SandboxKey": "/var/run/docker/netns/951963f7ef2d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-572724": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:9a:7d:da:b2:b8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1786ab454405791896f6daa543404507b38480aaf90e1b61a39fa7a7767ad3ab",
	                    "EndpointID": "939bc6ae67fc8867903dce9b0b86542f7f58ed8b9e6f431064628a74a850b10c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-572724",
	                        "6c48c7c23063"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-572724 -n embed-certs-572724
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-572724 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-572724 logs -n 25: (1.498448795s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ delete  │ -p force-systemd-flag-285387                                                                                                                                                                                                                  │ force-systemd-flag-285387 │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:01 UTC │
	│ start   │ -p cert-expiration-164379 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-164379    │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:02 UTC │
	│ delete  │ -p force-systemd-env-945733                                                                                                                                                                                                                   │ force-systemd-env-945733  │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ start   │ -p cert-options-533238 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ ssh     │ cert-options-533238 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ ssh     │ -p cert-options-533238 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ delete  │ -p cert-options-533238                                                                                                                                                                                                                        │ cert-options-533238       │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ start   │ -p old-k8s-version-135652 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:03 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-135652 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:03 UTC │                     │
	│ stop    │ -p old-k8s-version-135652 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:03 UTC │ 17 Oct 25 20:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-135652 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:04 UTC │ 17 Oct 25 20:04 UTC │
	│ start   │ -p old-k8s-version-135652 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:04 UTC │ 17 Oct 25 20:04 UTC │
	│ image   │ old-k8s-version-135652 image list --format=json                                                                                                                                                                                               │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ pause   │ -p old-k8s-version-135652 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │                     │
	│ delete  │ -p old-k8s-version-135652                                                                                                                                                                                                                     │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p cert-expiration-164379 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-164379    │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ delete  │ -p old-k8s-version-135652                                                                                                                                                                                                                     │ old-k8s-version-135652    │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p no-preload-413711 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-413711         │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:06 UTC │
	│ delete  │ -p cert-expiration-164379                                                                                                                                                                                                                     │ cert-expiration-164379    │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-572724        │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-413711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-413711         │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │                     │
	│ stop    │ -p no-preload-413711 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-413711         │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable dashboard -p no-preload-413711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-413711         │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p no-preload-413711 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-413711         │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-572724 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-572724        │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:07:04
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:07:04.522512  465315 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:07:04.522745  465315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:07:04.522770  465315 out.go:374] Setting ErrFile to fd 2...
	I1017 20:07:04.522787  465315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:07:04.523080  465315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:07:04.523519  465315 out.go:368] Setting JSON to false
	I1017 20:07:04.524551  465315 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10175,"bootTime":1760721449,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 20:07:04.524653  465315 start.go:141] virtualization:  
	I1017 20:07:04.529731  465315 out.go:179] * [no-preload-413711] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:07:04.532984  465315 notify.go:220] Checking for updates...
	I1017 20:07:04.536481  465315 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:07:04.539433  465315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:07:04.542349  465315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:07:04.545232  465315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 20:07:04.548428  465315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:07:04.551394  465315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:07:04.554906  465315 config.go:182] Loaded profile config "no-preload-413711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:07:04.555462  465315 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:07:04.582078  465315 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:07:04.582204  465315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:07:04.643515  465315 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:07:04.633763756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:07:04.643626  465315 docker.go:318] overlay module found
	I1017 20:07:04.646765  465315 out.go:179] * Using the docker driver based on existing profile
	I1017 20:07:04.649944  465315 start.go:305] selected driver: docker
	I1017 20:07:04.649964  465315 start.go:925] validating driver "docker" against &{Name:no-preload-413711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-413711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:07:04.650074  465315 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:07:04.650792  465315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:07:04.709817  465315 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:07:04.700083192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:07:04.710161  465315 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:07:04.710206  465315 cni.go:84] Creating CNI manager for ""
	I1017 20:07:04.710266  465315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:07:04.710308  465315 start.go:349] cluster config:
	{Name:no-preload-413711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-413711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:07:04.715385  465315 out.go:179] * Starting "no-preload-413711" primary control-plane node in "no-preload-413711" cluster
	I1017 20:07:04.718327  465315 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:07:04.721177  465315 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:07:04.724063  465315 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:07:04.724139  465315 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:07:04.724212  465315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/config.json ...
	I1017 20:07:04.724498  465315 cache.go:107] acquiring lock: {Name:mk283109ed0890930f2e1227f11d2249b911e57a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:07:04.724711  465315 cache.go:115] /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1017 20:07:04.724727  465315 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 239.272µs
	I1017 20:07:04.724746  465315 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1017 20:07:04.724807  465315 cache.go:107] acquiring lock: {Name:mka84b5135a74ba5733d8385265d22074d9ab0e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:07:04.724856  465315 cache.go:115] /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1017 20:07:04.724845  465315 cache.go:107] acquiring lock: {Name:mk2b3df5904e0931bd8e73c82e0d0c48699a646d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:07:04.724878  465315 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 73.77µs
	I1017 20:07:04.724886  465315 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1017 20:07:04.724897  465315 cache.go:107] acquiring lock: {Name:mk1e1ced76b5940fb798bd2fb1787c6b86455f50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:07:04.724909  465315 cache.go:115] /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1017 20:07:04.724917  465315 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 85.437µs
	I1017 20:07:04.724924  465315 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1017 20:07:04.724929  465315 cache.go:115] /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1017 20:07:04.724935  465315 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.375µs
	I1017 20:07:04.724940  465315 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1017 20:07:04.724936  465315 cache.go:107] acquiring lock: {Name:mk2b501424b657c50c256e29411e9fad9da670ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:07:04.724949  465315 cache.go:107] acquiring lock: {Name:mk6893f7a821567425608f33c9e4dd4b61533b3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:07:04.724965  465315 cache.go:115] /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1017 20:07:04.724971  465315 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 36.552µs
	I1017 20:07:04.724977  465315 cache.go:115] /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1017 20:07:04.724983  465315 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 35.232µs
	I1017 20:07:04.724986  465315 cache.go:107] acquiring lock: {Name:mk197d76b3183f6cd1abe6ab14c60b265c8ea976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:07:04.724996  465315 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1017 20:07:04.724977  465315 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1017 20:07:04.725016  465315 cache.go:115] /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1017 20:07:04.725011  465315 cache.go:107] acquiring lock: {Name:mk63a65d26a43cccf847cbc76c19d28fabab4bbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:07:04.725023  465315 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 37.127µs
	I1017 20:07:04.725029  465315 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1017 20:07:04.725041  465315 cache.go:115] /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1017 20:07:04.725046  465315 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 36.758µs
	I1017 20:07:04.725068  465315 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21753-257739/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1017 20:07:04.725075  465315 cache.go:87] Successfully saved all images to host disk.
	I1017 20:07:04.743686  465315 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:07:04.743705  465315 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:07:04.743717  465315 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:07:04.743739  465315 start.go:360] acquireMachinesLock for no-preload-413711: {Name:mkcf98612eed3eee149776f16a5059322d31b2a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:07:04.743788  465315 start.go:364] duration metric: took 33.993µs to acquireMachinesLock for "no-preload-413711"
	I1017 20:07:04.743806  465315 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:07:04.743811  465315 fix.go:54] fixHost starting: 
	I1017 20:07:04.744078  465315 cli_runner.go:164] Run: docker container inspect no-preload-413711 --format={{.State.Status}}
	I1017 20:07:04.760432  465315 fix.go:112] recreateIfNeeded on no-preload-413711: state=Stopped err=<nil>
	W1017 20:07:04.760458  465315 fix.go:138] unexpected machine state, will restart: <nil>
	W1017 20:07:03.164459  461068 node_ready.go:57] node "embed-certs-572724" has "Ready":"False" status (will retry)
	W1017 20:07:05.164502  461068 node_ready.go:57] node "embed-certs-572724" has "Ready":"False" status (will retry)
	I1017 20:07:06.676644  461068 node_ready.go:49] node "embed-certs-572724" is "Ready"
	I1017 20:07:06.676678  461068 node_ready.go:38] duration metric: took 40.514998782s for node "embed-certs-572724" to be "Ready" ...
	I1017 20:07:06.676694  461068 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:07:06.676769  461068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:07:06.705192  461068 api_server.go:72] duration metric: took 42.49427079s to wait for apiserver process to appear ...
	I1017 20:07:06.705223  461068 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:07:06.705288  461068 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1017 20:07:06.717080  461068 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1017 20:07:06.718177  461068 api_server.go:141] control plane version: v1.34.1
	I1017 20:07:06.718207  461068 api_server.go:131] duration metric: took 12.97657ms to wait for apiserver health ...
	I1017 20:07:06.718216  461068 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:07:06.724758  461068 system_pods.go:59] 8 kube-system pods found
	I1017 20:07:06.724841  461068 system_pods.go:61] "coredns-66bc5c9577-q9n55" [17c2ad15-d7b1-4089-8d58-7f9a984c1aa4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:07:06.724871  461068 system_pods.go:61] "etcd-embed-certs-572724" [66f610ec-bb5f-479e-abeb-d4372c4b89ee] Running
	I1017 20:07:06.724893  461068 system_pods.go:61] "kindnet-cg6w6" [b1442750-2145-4f2a-a45a-8b8506de6abf] Running
	I1017 20:07:06.724915  461068 system_pods.go:61] "kube-apiserver-embed-certs-572724" [d490fad2-ab3c-4b6b-ae8c-83ed67aedd66] Running
	I1017 20:07:06.724937  461068 system_pods.go:61] "kube-controller-manager-embed-certs-572724" [a1d1cfbf-5f1e-4979-92f8-6235a885ea11] Running
	I1017 20:07:06.724968  461068 system_pods.go:61] "kube-proxy-2jxkk" [89e3a128-22d2-42fa-8277-54ea446f0a18] Running
	I1017 20:07:06.724990  461068 system_pods.go:61] "kube-scheduler-embed-certs-572724" [15633822-38e1-468c-b6ea-a0f51d229ba0] Running
	I1017 20:07:06.725012  461068 system_pods.go:61] "storage-provisioner" [5c2944e0-d296-4a7e-98e8-dcbf69da9bc7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:07:06.725035  461068 system_pods.go:74] duration metric: took 6.812309ms to wait for pod list to return data ...
	I1017 20:07:06.725064  461068 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:07:06.729114  461068 default_sa.go:45] found service account: "default"
	I1017 20:07:06.729182  461068 default_sa.go:55] duration metric: took 4.084606ms for default service account to be created ...
	I1017 20:07:06.729208  461068 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:07:06.823736  461068 system_pods.go:86] 8 kube-system pods found
	I1017 20:07:06.823834  461068 system_pods.go:89] "coredns-66bc5c9577-q9n55" [17c2ad15-d7b1-4089-8d58-7f9a984c1aa4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:07:06.823864  461068 system_pods.go:89] "etcd-embed-certs-572724" [66f610ec-bb5f-479e-abeb-d4372c4b89ee] Running
	I1017 20:07:06.823890  461068 system_pods.go:89] "kindnet-cg6w6" [b1442750-2145-4f2a-a45a-8b8506de6abf] Running
	I1017 20:07:06.823912  461068 system_pods.go:89] "kube-apiserver-embed-certs-572724" [d490fad2-ab3c-4b6b-ae8c-83ed67aedd66] Running
	I1017 20:07:06.823934  461068 system_pods.go:89] "kube-controller-manager-embed-certs-572724" [a1d1cfbf-5f1e-4979-92f8-6235a885ea11] Running
	I1017 20:07:06.823963  461068 system_pods.go:89] "kube-proxy-2jxkk" [89e3a128-22d2-42fa-8277-54ea446f0a18] Running
	I1017 20:07:06.823983  461068 system_pods.go:89] "kube-scheduler-embed-certs-572724" [15633822-38e1-468c-b6ea-a0f51d229ba0] Running
	I1017 20:07:06.824005  461068 system_pods.go:89] "storage-provisioner" [5c2944e0-d296-4a7e-98e8-dcbf69da9bc7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:07:06.824055  461068 retry.go:31] will retry after 200.537482ms: missing components: kube-dns
	I1017 20:07:07.041379  461068 system_pods.go:86] 8 kube-system pods found
	I1017 20:07:07.041410  461068 system_pods.go:89] "coredns-66bc5c9577-q9n55" [17c2ad15-d7b1-4089-8d58-7f9a984c1aa4] Running
	I1017 20:07:07.041417  461068 system_pods.go:89] "etcd-embed-certs-572724" [66f610ec-bb5f-479e-abeb-d4372c4b89ee] Running
	I1017 20:07:07.041421  461068 system_pods.go:89] "kindnet-cg6w6" [b1442750-2145-4f2a-a45a-8b8506de6abf] Running
	I1017 20:07:07.041425  461068 system_pods.go:89] "kube-apiserver-embed-certs-572724" [d490fad2-ab3c-4b6b-ae8c-83ed67aedd66] Running
	I1017 20:07:07.041429  461068 system_pods.go:89] "kube-controller-manager-embed-certs-572724" [a1d1cfbf-5f1e-4979-92f8-6235a885ea11] Running
	I1017 20:07:07.041433  461068 system_pods.go:89] "kube-proxy-2jxkk" [89e3a128-22d2-42fa-8277-54ea446f0a18] Running
	I1017 20:07:07.041440  461068 system_pods.go:89] "kube-scheduler-embed-certs-572724" [15633822-38e1-468c-b6ea-a0f51d229ba0] Running
	I1017 20:07:07.041447  461068 system_pods.go:89] "storage-provisioner" [5c2944e0-d296-4a7e-98e8-dcbf69da9bc7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:07:07.041456  461068 system_pods.go:126] duration metric: took 312.221508ms to wait for k8s-apps to be running ...
	I1017 20:07:07.041508  461068 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:07:07.041583  461068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:07:07.054500  461068 system_svc.go:56] duration metric: took 12.983265ms WaitForService to wait for kubelet
	I1017 20:07:07.054582  461068 kubeadm.go:586] duration metric: took 42.843666461s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:07:07.054618  461068 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:07:07.059611  461068 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:07:07.059647  461068 node_conditions.go:123] node cpu capacity is 2
	I1017 20:07:07.059661  461068 node_conditions.go:105] duration metric: took 5.014352ms to run NodePressure ...
	I1017 20:07:07.059674  461068 start.go:241] waiting for startup goroutines ...
	I1017 20:07:07.059704  461068 start.go:246] waiting for cluster config update ...
	I1017 20:07:07.059727  461068 start.go:255] writing updated cluster config ...
	I1017 20:07:07.060040  461068 ssh_runner.go:195] Run: rm -f paused
	I1017 20:07:07.063740  461068 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:07:07.070528  461068 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q9n55" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:07.075204  461068 pod_ready.go:94] pod "coredns-66bc5c9577-q9n55" is "Ready"
	I1017 20:07:07.075277  461068 pod_ready.go:86] duration metric: took 4.719443ms for pod "coredns-66bc5c9577-q9n55" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:07.077660  461068 pod_ready.go:83] waiting for pod "etcd-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:07.081902  461068 pod_ready.go:94] pod "etcd-embed-certs-572724" is "Ready"
	I1017 20:07:07.081971  461068 pod_ready.go:86] duration metric: took 4.246677ms for pod "etcd-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:07.091696  461068 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:07.096447  461068 pod_ready.go:94] pod "kube-apiserver-embed-certs-572724" is "Ready"
	I1017 20:07:07.096568  461068 pod_ready.go:86] duration metric: took 4.796421ms for pod "kube-apiserver-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:07.098743  461068 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:07.469196  461068 pod_ready.go:94] pod "kube-controller-manager-embed-certs-572724" is "Ready"
	I1017 20:07:07.469231  461068 pod_ready.go:86] duration metric: took 370.431515ms for pod "kube-controller-manager-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:07.669289  461068 pod_ready.go:83] waiting for pod "kube-proxy-2jxkk" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:08.068801  461068 pod_ready.go:94] pod "kube-proxy-2jxkk" is "Ready"
	I1017 20:07:08.068834  461068 pod_ready.go:86] duration metric: took 399.504133ms for pod "kube-proxy-2jxkk" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:08.270024  461068 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:08.669358  461068 pod_ready.go:94] pod "kube-scheduler-embed-certs-572724" is "Ready"
	I1017 20:07:08.669383  461068 pod_ready.go:86] duration metric: took 399.332101ms for pod "kube-scheduler-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:08.669394  461068 pod_ready.go:40] duration metric: took 1.605623649s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:07:08.741935  461068 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 20:07:08.745411  461068 out.go:179] * Done! kubectl is now configured to use "embed-certs-572724" cluster and "default" namespace by default
	I1017 20:07:04.763631  465315 out.go:252] * Restarting existing docker container for "no-preload-413711" ...
	I1017 20:07:04.763710  465315 cli_runner.go:164] Run: docker start no-preload-413711
	I1017 20:07:05.032093  465315 cli_runner.go:164] Run: docker container inspect no-preload-413711 --format={{.State.Status}}
	I1017 20:07:05.058041  465315 kic.go:430] container "no-preload-413711" state is running.
	I1017 20:07:05.058444  465315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-413711
	I1017 20:07:05.085209  465315 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/config.json ...
	I1017 20:07:05.085470  465315 machine.go:93] provisionDockerMachine start ...
	I1017 20:07:05.085550  465315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-413711
	I1017 20:07:05.111093  465315 main.go:141] libmachine: Using SSH client type: native
	I1017 20:07:05.113960  465315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I1017 20:07:05.113985  465315 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:07:05.115423  465315 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 20:07:08.269219  465315 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-413711
	
	I1017 20:07:08.269248  465315 ubuntu.go:182] provisioning hostname "no-preload-413711"
	I1017 20:07:08.269326  465315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-413711
	I1017 20:07:08.290192  465315 main.go:141] libmachine: Using SSH client type: native
	I1017 20:07:08.290536  465315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I1017 20:07:08.290552  465315 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-413711 && echo "no-preload-413711" | sudo tee /etc/hostname
	I1017 20:07:08.451037  465315 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-413711
	
	I1017 20:07:08.451120  465315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-413711
	I1017 20:07:08.471822  465315 main.go:141] libmachine: Using SSH client type: native
	I1017 20:07:08.472136  465315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I1017 20:07:08.472159  465315 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-413711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-413711/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-413711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:07:08.629106  465315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:07:08.629196  465315 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 20:07:08.629250  465315 ubuntu.go:190] setting up certificates
	I1017 20:07:08.629278  465315 provision.go:84] configureAuth start
	I1017 20:07:08.629359  465315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-413711
	I1017 20:07:08.646141  465315 provision.go:143] copyHostCerts
	I1017 20:07:08.646229  465315 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 20:07:08.646254  465315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 20:07:08.646336  465315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 20:07:08.646477  465315 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 20:07:08.646491  465315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 20:07:08.646524  465315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 20:07:08.646589  465315 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 20:07:08.646597  465315 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 20:07:08.646630  465315 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 20:07:08.646686  465315 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.no-preload-413711 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-413711]
	I1017 20:07:08.935399  465315 provision.go:177] copyRemoteCerts
	I1017 20:07:08.935508  465315 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:07:08.935593  465315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-413711
	I1017 20:07:08.974269  465315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/no-preload-413711/id_rsa Username:docker}
	I1017 20:07:09.096637  465315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 20:07:09.116206  465315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:07:09.136050  465315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:07:09.155363  465315 provision.go:87] duration metric: took 526.056305ms to configureAuth
	I1017 20:07:09.155389  465315 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:07:09.155582  465315 config.go:182] Loaded profile config "no-preload-413711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:07:09.155692  465315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-413711
	I1017 20:07:09.173102  465315 main.go:141] libmachine: Using SSH client type: native
	I1017 20:07:09.173464  465315 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I1017 20:07:09.173486  465315 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:07:09.536223  465315 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:07:09.536266  465315 machine.go:96] duration metric: took 4.450763538s to provisionDockerMachine
	I1017 20:07:09.536277  465315 start.go:293] postStartSetup for "no-preload-413711" (driver="docker")
	I1017 20:07:09.536288  465315 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:07:09.536358  465315 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:07:09.536402  465315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-413711
	I1017 20:07:09.560812  465315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/no-preload-413711/id_rsa Username:docker}
	I1017 20:07:09.664570  465315 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:07:09.667774  465315 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:07:09.667800  465315 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:07:09.667810  465315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 20:07:09.667871  465315 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 20:07:09.667953  465315 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 20:07:09.668053  465315 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:07:09.675795  465315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:07:09.693884  465315 start.go:296] duration metric: took 157.591406ms for postStartSetup
	I1017 20:07:09.693961  465315 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:07:09.694010  465315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-413711
	I1017 20:07:09.712487  465315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/no-preload-413711/id_rsa Username:docker}
	I1017 20:07:09.813817  465315 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:07:09.818458  465315 fix.go:56] duration metric: took 5.07464104s for fixHost
	I1017 20:07:09.818485  465315 start.go:83] releasing machines lock for "no-preload-413711", held for 5.074689236s
	I1017 20:07:09.818573  465315 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-413711
	I1017 20:07:09.835990  465315 ssh_runner.go:195] Run: cat /version.json
	I1017 20:07:09.836057  465315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-413711
	I1017 20:07:09.836161  465315 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:07:09.836217  465315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-413711
	I1017 20:07:09.858914  465315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/no-preload-413711/id_rsa Username:docker}
	I1017 20:07:09.874189  465315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/no-preload-413711/id_rsa Username:docker}
	I1017 20:07:10.093538  465315 ssh_runner.go:195] Run: systemctl --version
	I1017 20:07:10.100827  465315 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:07:10.144837  465315 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:07:10.149988  465315 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:07:10.150084  465315 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:07:10.159284  465315 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:07:10.159365  465315 start.go:495] detecting cgroup driver to use...
	I1017 20:07:10.159415  465315 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:07:10.159471  465315 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:07:10.175661  465315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:07:10.189848  465315 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:07:10.190027  465315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:07:10.206404  465315 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:07:10.221704  465315 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:07:10.334631  465315 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:07:10.467552  465315 docker.go:234] disabling docker service ...
	I1017 20:07:10.467641  465315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:07:10.483186  465315 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:07:10.500230  465315 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:07:10.615793  465315 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:07:10.740294  465315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:07:10.754046  465315 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:07:10.769158  465315 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:07:10.769274  465315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:10.779344  465315 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:07:10.779454  465315 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:10.790178  465315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:10.800620  465315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:10.810127  465315 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:07:10.818374  465315 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:10.827091  465315 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:10.835752  465315 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:10.845569  465315 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:07:10.853175  465315 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:07:10.860374  465315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:07:10.980871  465315 ssh_runner.go:195] Run: sudo systemctl restart crio
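The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup_manager, conmon_cgroup, default_sysctls) before the daemon-reload and restart. A hedged way to confirm which values CRI-O actually picked up, using the same `crio config` command the log runs a few lines later:

    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup'
    # expected, given the sed edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"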
	I1017 20:07:11.167268  465315 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:07:11.167338  465315 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:07:11.177256  465315 start.go:563] Will wait 60s for crictl version
	I1017 20:07:11.177316  465315 ssh_runner.go:195] Run: which crictl
	I1017 20:07:11.182164  465315 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:07:11.211586  465315 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:07:11.211677  465315 ssh_runner.go:195] Run: crio --version
	I1017 20:07:11.248279  465315 ssh_runner.go:195] Run: crio --version
	I1017 20:07:11.284792  465315 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:07:11.287937  465315 cli_runner.go:164] Run: docker network inspect no-preload-413711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:07:11.306228  465315 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 20:07:11.310509  465315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:07:11.321095  465315 kubeadm.go:883] updating cluster {Name:no-preload-413711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-413711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:07:11.321202  465315 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:07:11.321243  465315 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:07:11.358654  465315 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:07:11.358679  465315 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:07:11.358688  465315 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1017 20:07:11.358779  465315 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-413711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-413711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
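The ExecStart override above is rendered into a systemd drop-in; the scp lines further down place it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf next to /lib/systemd/system/kubelet.service. A small sketch of inspecting the merged unit on the node (not part of this run):

    systemctl cat kubelet                              # kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart --no-pager     # the effective, overridden ExecStart line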
	I1017 20:07:11.358866  465315 ssh_runner.go:195] Run: crio config
	I1017 20:07:11.429389  465315 cni.go:84] Creating CNI manager for ""
	I1017 20:07:11.429466  465315 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:07:11.429539  465315 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:07:11.429644  465315 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-413711 NodeName:no-preload-413711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:07:11.429839  465315 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-413711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
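This generated YAML is copied to /var/tmp/minikube/kubeadm.yaml.new just below. For reference, a config in this shape is what kubeadm consumes directly; on a fresh node (not what this run does, since it restarts an existing control plane) the equivalent manual invocation would look roughly like:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new
    # or, to sanity-check the file without touching the cluster (recent kubeadm releases):
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new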
	I1017 20:07:11.429954  465315 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:07:11.442078  465315 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:07:11.442147  465315 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:07:11.451048  465315 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1017 20:07:11.465586  465315 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:07:11.486154  465315 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1017 20:07:11.506232  465315 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:07:11.510422  465315 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:07:11.520368  465315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:07:11.625131  465315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:07:11.643311  465315 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711 for IP: 192.168.76.2
	I1017 20:07:11.643343  465315 certs.go:195] generating shared ca certs ...
	I1017 20:07:11.643358  465315 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:07:11.643501  465315 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 20:07:11.643566  465315 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 20:07:11.643583  465315 certs.go:257] generating profile certs ...
	I1017 20:07:11.643669  465315 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/client.key
	I1017 20:07:11.643740  465315 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/apiserver.key.420d8401
	I1017 20:07:11.643792  465315 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/proxy-client.key
	I1017 20:07:11.643909  465315 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 20:07:11.643944  465315 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 20:07:11.643956  465315 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:07:11.643979  465315 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:07:11.644005  465315 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:07:11.644035  465315 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 20:07:11.644081  465315 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:07:11.644754  465315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:07:11.666695  465315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:07:11.687015  465315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:07:11.708034  465315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 20:07:11.730813  465315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 20:07:11.754962  465315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:07:11.783136  465315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:07:11.815152  465315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:07:11.835846  465315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 20:07:11.865552  465315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:07:11.887428  465315 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 20:07:11.907409  465315 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:07:11.920719  465315 ssh_runner.go:195] Run: openssl version
	I1017 20:07:11.932762  465315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 20:07:11.942034  465315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 20:07:11.945880  465315 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 20:07:11.945982  465315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 20:07:11.987162  465315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 20:07:11.995311  465315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 20:07:12.013240  465315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 20:07:12.021830  465315 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 20:07:12.021995  465315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 20:07:12.067964  465315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:07:12.076674  465315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:07:12.085994  465315 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:07:12.090087  465315 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:07:12.090158  465315 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:07:12.136152  465315 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
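The openssl -hash / ln -fs pairs above reproduce OpenSSL's c_rehash layout: the subject hash printed by openssl becomes the `<hash>.0` symlink name that verification looks up in /etc/ssl/certs. A minimal sketch of the convention using paths from this log (the hash value will vary per CA):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)    # e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt     # issuer resolved via the hash link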
	I1017 20:07:12.144070  465315 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:07:12.147859  465315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:07:12.190342  465315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:07:12.231669  465315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:07:12.280750  465315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:07:12.343271  465315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:07:12.443112  465315 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 20:07:12.519882  465315 kubeadm.go:400] StartCluster: {Name:no-preload-413711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-413711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:07:12.520032  465315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:07:12.520139  465315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:07:12.556802  465315 cri.go:89] found id: "deaac6f262625b4a8323f78d4de40fa760609f9d1fb3c2272664be7f075fd5a4"
	I1017 20:07:12.556863  465315 cri.go:89] found id: "d3cbad8ffb59387c5fb4641f605385ffcb3d1293c2dbeb606812de21a7dbfcbe"
	I1017 20:07:12.556892  465315 cri.go:89] found id: "c38dce9b2ac325e84a1349d8c32881acb0b877b98f49fe5fd6e22a8ed8a5df1b"
	I1017 20:07:12.556911  465315 cri.go:89] found id: "36109bb4bd5f615a7a96ed9755d97a57c974349fd49cb42b98be4765efc30f76"
	I1017 20:07:12.556947  465315 cri.go:89] found id: ""
	I1017 20:07:12.557038  465315 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 20:07:12.577567  465315 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:07:12Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:07:12.577721  465315 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:07:12.589650  465315 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:07:12.589721  465315 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:07:12.589814  465315 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:07:12.602887  465315 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:07:12.603877  465315 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-413711" does not appear in /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:07:12.604573  465315 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-257739/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-413711" cluster setting kubeconfig missing "no-preload-413711" context setting]
	I1017 20:07:12.605477  465315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:07:12.607436  465315 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:07:12.618570  465315 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1017 20:07:12.618605  465315 kubeadm.go:601] duration metric: took 28.863385ms to restartPrimaryControlPlane
	I1017 20:07:12.618615  465315 kubeadm.go:402] duration metric: took 98.743724ms to StartCluster
	I1017 20:07:12.618629  465315 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:07:12.618690  465315 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:07:12.620149  465315 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:07:12.620389  465315 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:07:12.620804  465315 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:07:12.620891  465315 addons.go:69] Setting storage-provisioner=true in profile "no-preload-413711"
	I1017 20:07:12.620899  465315 config.go:182] Loaded profile config "no-preload-413711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:07:12.620911  465315 addons.go:69] Setting dashboard=true in profile "no-preload-413711"
	I1017 20:07:12.620918  465315 addons.go:238] Setting addon dashboard=true in "no-preload-413711"
	W1017 20:07:12.620924  465315 addons.go:247] addon dashboard should already be in state true
	I1017 20:07:12.620905  465315 addons.go:238] Setting addon storage-provisioner=true in "no-preload-413711"
	I1017 20:07:12.620954  465315 addons.go:69] Setting default-storageclass=true in profile "no-preload-413711"
	W1017 20:07:12.620956  465315 addons.go:247] addon storage-provisioner should already be in state true
	I1017 20:07:12.620964  465315 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-413711"
	I1017 20:07:12.620976  465315 host.go:66] Checking if "no-preload-413711" exists ...
	I1017 20:07:12.621295  465315 cli_runner.go:164] Run: docker container inspect no-preload-413711 --format={{.State.Status}}
	I1017 20:07:12.621410  465315 cli_runner.go:164] Run: docker container inspect no-preload-413711 --format={{.State.Status}}
	I1017 20:07:12.620948  465315 host.go:66] Checking if "no-preload-413711" exists ...
	I1017 20:07:12.624506  465315 out.go:179] * Verifying Kubernetes components...
	I1017 20:07:12.625487  465315 cli_runner.go:164] Run: docker container inspect no-preload-413711 --format={{.State.Status}}
	I1017 20:07:12.632641  465315 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:07:12.672755  465315 addons.go:238] Setting addon default-storageclass=true in "no-preload-413711"
	W1017 20:07:12.672780  465315 addons.go:247] addon default-storageclass should already be in state true
	I1017 20:07:12.672806  465315 host.go:66] Checking if "no-preload-413711" exists ...
	I1017 20:07:12.673255  465315 cli_runner.go:164] Run: docker container inspect no-preload-413711 --format={{.State.Status}}
	I1017 20:07:12.691285  465315 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:07:12.691395  465315 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1017 20:07:12.694062  465315 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:07:12.694082  465315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:07:12.694142  465315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-413711
	I1017 20:07:12.697154  465315 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 20:07:12.704849  465315 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 20:07:12.704873  465315 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 20:07:12.704950  465315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-413711
	I1017 20:07:12.712620  465315 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:07:12.712643  465315 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:07:12.712709  465315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-413711
	I1017 20:07:12.752187  465315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/no-preload-413711/id_rsa Username:docker}
	I1017 20:07:12.766573  465315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/no-preload-413711/id_rsa Username:docker}
	I1017 20:07:12.770110  465315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/no-preload-413711/id_rsa Username:docker}
	I1017 20:07:12.993931  465315 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:07:13.016983  465315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:07:13.067489  465315 node_ready.go:35] waiting up to 6m0s for node "no-preload-413711" to be "Ready" ...
	I1017 20:07:13.087184  465315 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 20:07:13.087251  465315 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 20:07:13.094432  465315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:07:13.143647  465315 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 20:07:13.143720  465315 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 20:07:13.197674  465315 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 20:07:13.197751  465315 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 20:07:13.258300  465315 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 20:07:13.258371  465315 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 20:07:13.329638  465315 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 20:07:13.329709  465315 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 20:07:13.349058  465315 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 20:07:13.349133  465315 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 20:07:13.376249  465315 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 20:07:13.376322  465315 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 20:07:13.406025  465315 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 20:07:13.406100  465315 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 20:07:13.427646  465315 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 20:07:13.427721  465315 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 20:07:13.445470  465315 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 20:07:17.874161  465315 node_ready.go:49] node "no-preload-413711" is "Ready"
	I1017 20:07:17.874198  465315 node_ready.go:38] duration metric: took 4.806637803s for node "no-preload-413711" to be "Ready" ...
	I1017 20:07:17.874212  465315 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:07:17.874281  465315 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:07:18.989882  465315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.972813899s)
	I1017 20:07:18.989940  465315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.895439605s)
	I1017 20:07:19.074301  465315 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.628730245s)
	I1017 20:07:19.074557  465315 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.200261056s)
	I1017 20:07:19.074577  465315 api_server.go:72] duration metric: took 6.454155204s to wait for apiserver process to appear ...
	I1017 20:07:19.074592  465315 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:07:19.074608  465315 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 20:07:19.077707  465315 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-413711 addons enable metrics-server
	
	I1017 20:07:19.080564  465315 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1017 20:07:19.083608  465315 addons.go:514] duration metric: took 6.462804532s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1017 20:07:19.087358  465315 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
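The healthz probe reported here can be reproduced by hand against the same endpoint from the log; the serving certificate is minikube's own CA, hence -k (or --cacert with the profile's ca.crt):

    curl -k https://192.168.76.2:8443/healthz
    # ok
    curl -k "https://192.168.76.2:8443/healthz?verbose" | tail -n 5    # per-check breakdown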
	I1017 20:07:19.088351  465315 api_server.go:141] control plane version: v1.34.1
	I1017 20:07:19.088377  465315 api_server.go:131] duration metric: took 13.778426ms to wait for apiserver health ...
	I1017 20:07:19.088387  465315 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:07:19.091730  465315 system_pods.go:59] 8 kube-system pods found
	I1017 20:07:19.091764  465315 system_pods.go:61] "coredns-66bc5c9577-4bslb" [be4a3950-c683-4860-a96b-c48c9db546ea] Running
	I1017 20:07:19.091771  465315 system_pods.go:61] "etcd-no-preload-413711" [68a6798f-bea7-4d3f-b842-c2bbcc9fd338] Running
	I1017 20:07:19.091776  465315 system_pods.go:61] "kindnet-7jkvq" [a848c0df-632d-4733-9f76-1ed315cae3be] Running
	I1017 20:07:19.091781  465315 system_pods.go:61] "kube-apiserver-no-preload-413711" [2e789da4-e54f-4641-9fe0-0c9b84c006ac] Running
	I1017 20:07:19.091794  465315 system_pods.go:61] "kube-controller-manager-no-preload-413711" [1edb18bd-3e00-4c28-be3c-1e15ec28992a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:07:19.091807  465315 system_pods.go:61] "kube-proxy-kl48k" [30ab540f-a82e-479b-956b-1b7596cf1561] Running
	I1017 20:07:19.091815  465315 system_pods.go:61] "kube-scheduler-no-preload-413711" [61f47adf-393d-4916-a1e0-326db562bb59] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:07:19.091820  465315 system_pods.go:61] "storage-provisioner" [9b892a85-762b-434f-a48c-1ff1266c2b06] Running
	I1017 20:07:19.091826  465315 system_pods.go:74] duration metric: took 3.434137ms to wait for pod list to return data ...
	I1017 20:07:19.091839  465315 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:07:19.094342  465315 default_sa.go:45] found service account: "default"
	I1017 20:07:19.094365  465315 default_sa.go:55] duration metric: took 2.520562ms for default service account to be created ...
	I1017 20:07:19.094373  465315 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:07:19.098630  465315 system_pods.go:86] 8 kube-system pods found
	I1017 20:07:19.098660  465315 system_pods.go:89] "coredns-66bc5c9577-4bslb" [be4a3950-c683-4860-a96b-c48c9db546ea] Running
	I1017 20:07:19.098667  465315 system_pods.go:89] "etcd-no-preload-413711" [68a6798f-bea7-4d3f-b842-c2bbcc9fd338] Running
	I1017 20:07:19.098677  465315 system_pods.go:89] "kindnet-7jkvq" [a848c0df-632d-4733-9f76-1ed315cae3be] Running
	I1017 20:07:19.098681  465315 system_pods.go:89] "kube-apiserver-no-preload-413711" [2e789da4-e54f-4641-9fe0-0c9b84c006ac] Running
	I1017 20:07:19.098689  465315 system_pods.go:89] "kube-controller-manager-no-preload-413711" [1edb18bd-3e00-4c28-be3c-1e15ec28992a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:07:19.098697  465315 system_pods.go:89] "kube-proxy-kl48k" [30ab540f-a82e-479b-956b-1b7596cf1561] Running
	I1017 20:07:19.098707  465315 system_pods.go:89] "kube-scheduler-no-preload-413711" [61f47adf-393d-4916-a1e0-326db562bb59] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:07:19.098711  465315 system_pods.go:89] "storage-provisioner" [9b892a85-762b-434f-a48c-1ff1266c2b06] Running
	I1017 20:07:19.098724  465315 system_pods.go:126] duration metric: took 4.345907ms to wait for k8s-apps to be running ...
	I1017 20:07:19.098732  465315 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:07:19.099062  465315 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:07:19.114676  465315 system_svc.go:56] duration metric: took 15.9304ms WaitForService to wait for kubelet
	I1017 20:07:19.114703  465315 kubeadm.go:586] duration metric: took 6.494280162s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:07:19.114722  465315 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:07:19.117736  465315 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:07:19.117761  465315 node_conditions.go:123] node cpu capacity is 2
	I1017 20:07:19.117772  465315 node_conditions.go:105] duration metric: took 3.04474ms to run NodePressure ...
	I1017 20:07:19.117783  465315 start.go:241] waiting for startup goroutines ...
	I1017 20:07:19.117792  465315 start.go:246] waiting for cluster config update ...
	I1017 20:07:19.117802  465315 start.go:255] writing updated cluster config ...
	I1017 20:07:19.118065  465315 ssh_runner.go:195] Run: rm -f paused
	I1017 20:07:19.121804  465315 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:07:19.126330  465315 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4bslb" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:19.136254  465315 pod_ready.go:94] pod "coredns-66bc5c9577-4bslb" is "Ready"
	I1017 20:07:19.136278  465315 pod_ready.go:86] duration metric: took 9.924969ms for pod "coredns-66bc5c9577-4bslb" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:19.138977  465315 pod_ready.go:83] waiting for pod "etcd-no-preload-413711" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:19.144425  465315 pod_ready.go:94] pod "etcd-no-preload-413711" is "Ready"
	I1017 20:07:19.144452  465315 pod_ready.go:86] duration metric: took 5.45539ms for pod "etcd-no-preload-413711" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:19.153263  465315 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-413711" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:19.159141  465315 pod_ready.go:94] pod "kube-apiserver-no-preload-413711" is "Ready"
	I1017 20:07:19.159166  465315 pod_ready.go:86] duration metric: took 5.875752ms for pod "kube-apiserver-no-preload-413711" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:19.162153  465315 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-413711" in "kube-system" namespace to be "Ready" or be gone ...
	
	
	==> CRI-O <==
	Oct 17 20:07:06 embed-certs-572724 crio[841]: time="2025-10-17T20:07:06.719412993Z" level=info msg="Created container bf73ad5f31acc6bb0edf962badae0cb69ff8ed633f347a8ec9963199318b3ebf: kube-system/coredns-66bc5c9577-q9n55/coredns" id=e95a3088-a3df-493d-9dab-ffc27a90cec0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:07:06 embed-certs-572724 crio[841]: time="2025-10-17T20:07:06.720298048Z" level=info msg="Starting container: bf73ad5f31acc6bb0edf962badae0cb69ff8ed633f347a8ec9963199318b3ebf" id=328b2505-71a2-4e50-925f-1bc4db7f660c name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:07:06 embed-certs-572724 crio[841]: time="2025-10-17T20:07:06.733688596Z" level=info msg="Started container" PID=1695 containerID=bf73ad5f31acc6bb0edf962badae0cb69ff8ed633f347a8ec9963199318b3ebf description=kube-system/coredns-66bc5c9577-q9n55/coredns id=328b2505-71a2-4e50-925f-1bc4db7f660c name=/runtime.v1.RuntimeService/StartContainer sandboxID=bd1180c3c20deec6a241faba9c6d60e5268e44d7ea39ed11165c79e5cfd3b825
	Oct 17 20:07:09 embed-certs-572724 crio[841]: time="2025-10-17T20:07:09.311462922Z" level=info msg="Running pod sandbox: default/busybox/POD" id=be1df833-c73c-40f2-97bf-1c122c855a02 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:07:09 embed-certs-572724 crio[841]: time="2025-10-17T20:07:09.311537373Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:07:09 embed-certs-572724 crio[841]: time="2025-10-17T20:07:09.322541255Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:49ff00c27b6d22a005a448dafc7ec3cefd4e953690ed7587911b97c170f34d83 UID:5f8cf53e-8a62-4677-8c9e-ec9aee8c1cbd NetNS:/var/run/netns/f6f0eee9-d1f1-4aa8-bf47-c08e4d19b1e7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049e9a0}] Aliases:map[]}"
	Oct 17 20:07:09 embed-certs-572724 crio[841]: time="2025-10-17T20:07:09.322596244Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 17 20:07:09 embed-certs-572724 crio[841]: time="2025-10-17T20:07:09.33742155Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:49ff00c27b6d22a005a448dafc7ec3cefd4e953690ed7587911b97c170f34d83 UID:5f8cf53e-8a62-4677-8c9e-ec9aee8c1cbd NetNS:/var/run/netns/f6f0eee9-d1f1-4aa8-bf47-c08e4d19b1e7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049e9a0}] Aliases:map[]}"
	Oct 17 20:07:09 embed-certs-572724 crio[841]: time="2025-10-17T20:07:09.337577723Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 17 20:07:09 embed-certs-572724 crio[841]: time="2025-10-17T20:07:09.344405809Z" level=info msg="Ran pod sandbox 49ff00c27b6d22a005a448dafc7ec3cefd4e953690ed7587911b97c170f34d83 with infra container: default/busybox/POD" id=be1df833-c73c-40f2-97bf-1c122c855a02 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:07:09 embed-certs-572724 crio[841]: time="2025-10-17T20:07:09.345548924Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6cddf762-d05f-4fec-ac12-1de90440c2e2 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:07:09 embed-certs-572724 crio[841]: time="2025-10-17T20:07:09.345778744Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=6cddf762-d05f-4fec-ac12-1de90440c2e2 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:07:09 embed-certs-572724 crio[841]: time="2025-10-17T20:07:09.345888952Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=6cddf762-d05f-4fec-ac12-1de90440c2e2 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:07:09 embed-certs-572724 crio[841]: time="2025-10-17T20:07:09.350332612Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a2554070-b662-4749-a9e0-d595356c1985 name=/runtime.v1.ImageService/PullImage
	Oct 17 20:07:09 embed-certs-572724 crio[841]: time="2025-10-17T20:07:09.354081776Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 20:07:11 embed-certs-572724 crio[841]: time="2025-10-17T20:07:11.430256718Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=a2554070-b662-4749-a9e0-d595356c1985 name=/runtime.v1.ImageService/PullImage
	Oct 17 20:07:11 embed-certs-572724 crio[841]: time="2025-10-17T20:07:11.432253881Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=019727f4-54b4-490c-9316-c8221d9f8499 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:07:11 embed-certs-572724 crio[841]: time="2025-10-17T20:07:11.436242423Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f87c74e9-a402-4dd4-92e7-080db7ad65e4 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:07:11 embed-certs-572724 crio[841]: time="2025-10-17T20:07:11.443610073Z" level=info msg="Creating container: default/busybox/busybox" id=876f3afc-a4a5-49b2-b017-4ac150bad76d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:07:11 embed-certs-572724 crio[841]: time="2025-10-17T20:07:11.444596737Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:07:11 embed-certs-572724 crio[841]: time="2025-10-17T20:07:11.455871347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:07:11 embed-certs-572724 crio[841]: time="2025-10-17T20:07:11.456638834Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:07:11 embed-certs-572724 crio[841]: time="2025-10-17T20:07:11.476840845Z" level=info msg="Created container c55fe6e749217a537f30c0dddc8fe37186ac86f2901991a355ad8f1fadc93bb4: default/busybox/busybox" id=876f3afc-a4a5-49b2-b017-4ac150bad76d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:07:11 embed-certs-572724 crio[841]: time="2025-10-17T20:07:11.481520035Z" level=info msg="Starting container: c55fe6e749217a537f30c0dddc8fe37186ac86f2901991a355ad8f1fadc93bb4" id=afb7ddbe-57b2-40e8-8ba1-3c42b93e7c85 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:07:11 embed-certs-572724 crio[841]: time="2025-10-17T20:07:11.484190731Z" level=info msg="Started container" PID=1748 containerID=c55fe6e749217a537f30c0dddc8fe37186ac86f2901991a355ad8f1fadc93bb4 description=default/busybox/busybox id=afb7ddbe-57b2-40e8-8ba1-3c42b93e7c85 name=/runtime.v1.RuntimeService/StartContainer sandboxID=49ff00c27b6d22a005a448dafc7ec3cefd4e953690ed7587911b97c170f34d83
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	c55fe6e749217       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago        Running             busybox                   0                   49ff00c27b6d2       busybox                                      default
	bf73ad5f31acc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   bd1180c3c20de       coredns-66bc5c9577-q9n55                     kube-system
	409ea7dbd3f32       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   5188124083a4a       storage-provisioner                          kube-system
	8d91ee301e6dc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   6158688749a29       kindnet-cg6w6                                kube-system
	c64d39d0deba4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   f4916509ff3c2       kube-proxy-2jxkk                             kube-system
	258a9ea5cc3c6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   c9a7121e6e835       kube-apiserver-embed-certs-572724            kube-system
	24db83545011d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   a7f31f0532697       kube-scheduler-embed-certs-572724            kube-system
	ef115d4157c1c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   f7187acdb5fb3       kube-controller-manager-embed-certs-572724   kube-system
	00ca3f5545d1a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   49ddbd3d88180       etcd-embed-certs-572724                      kube-system
	
	
	==> coredns [bf73ad5f31acc6bb0edf962badae0cb69ff8ed633f347a8ec9963199318b3ebf] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58182 - 38860 "HINFO IN 860626909685510270.3735780924689400925. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022784299s
	
	
	==> describe nodes <==
	Name:               embed-certs-572724
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-572724
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=embed-certs-572724
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_06_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:06:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-572724
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:07:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:07:21 +0000   Fri, 17 Oct 2025 20:06:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:07:21 +0000   Fri, 17 Oct 2025 20:06:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:07:21 +0000   Fri, 17 Oct 2025 20:06:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:07:21 +0000   Fri, 17 Oct 2025 20:07:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-572724
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                20557b6e-804a-45ff-a381-36f74b0f1294
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-q9n55                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-572724                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-cg6w6                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-embed-certs-572724             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-embed-certs-572724    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-2jxkk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-embed-certs-572724             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   Starting                 74s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 74s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node embed-certs-572724 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node embed-certs-572724 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     74s (x8 over 74s)  kubelet          Node embed-certs-572724 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s                kubelet          Node embed-certs-572724 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s                kubelet          Node embed-certs-572724 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s                kubelet          Node embed-certs-572724 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                node-controller  Node embed-certs-572724 event: Registered Node embed-certs-572724 in Controller
	  Normal   NodeReady                15s                kubelet          Node embed-certs-572724 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct17 19:42] overlayfs: idmapped layers are currently not supported
	[Oct17 19:43] overlayfs: idmapped layers are currently not supported
	[Oct17 19:45] overlayfs: idmapped layers are currently not supported
	[Oct17 19:46] overlayfs: idmapped layers are currently not supported
	[ +18.070710] overlayfs: idmapped layers are currently not supported
	[Oct17 19:47] overlayfs: idmapped layers are currently not supported
	[ +43.697346] overlayfs: idmapped layers are currently not supported
	[Oct17 19:48] overlayfs: idmapped layers are currently not supported
	[Oct17 19:49] overlayfs: idmapped layers are currently not supported
	[ +26.194162] overlayfs: idmapped layers are currently not supported
	[Oct17 19:50] overlayfs: idmapped layers are currently not supported
	[Oct17 19:52] overlayfs: idmapped layers are currently not supported
	[Oct17 19:54] overlayfs: idmapped layers are currently not supported
	[Oct17 19:55] overlayfs: idmapped layers are currently not supported
	[Oct17 19:56] overlayfs: idmapped layers are currently not supported
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:01] overlayfs: idmapped layers are currently not supported
	[ +29.873287] overlayfs: idmapped layers are currently not supported
	[Oct17 20:02] overlayfs: idmapped layers are currently not supported
	[ +29.827785] overlayfs: idmapped layers are currently not supported
	[Oct17 20:03] overlayfs: idmapped layers are currently not supported
	[Oct17 20:04] overlayfs: idmapped layers are currently not supported
	[Oct17 20:05] overlayfs: idmapped layers are currently not supported
	[Oct17 20:06] overlayfs: idmapped layers are currently not supported
	[Oct17 20:07] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [00ca3f5545d1a84acde7a64b93f8cf12b036f4ceb284e737e0d77e513a0901e1] <==
	{"level":"warn","ts":"2025-10-17T20:06:15.147519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.191031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.203514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.266272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.284494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.303666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.334474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.365609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.393528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.419185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.456337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.486282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.521706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.553427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.555112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.579960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.605365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.629461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.657153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.685224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.706444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.746958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.785072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.818908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:15.950352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52850","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:07:21 up  2:49,  0 user,  load average: 6.40, 4.28, 3.12
	Linux embed-certs-572724 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8d91ee301e6dcc6ae068ae206fe87220644d4ec7debcdef7e60482966d48165d] <==
	I1017 20:06:25.920445       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:06:25.922355       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 20:06:25.922483       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:06:25.922496       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:06:25.922506       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:06:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:06:26.113771       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:06:26.113801       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:06:26.113810       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:06:26.114087       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 20:06:56.110346       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 20:06:56.114605       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 20:06:56.114605       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 20:06:56.114698       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1017 20:06:57.414544       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:06:57.414624       1 metrics.go:72] Registering metrics
	I1017 20:06:57.414976       1 controller.go:711] "Syncing nftables rules"
	I1017 20:07:06.116786       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:07:06.116844       1 main.go:301] handling current node
	I1017 20:07:16.112593       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:07:16.112687       1 main.go:301] handling current node
	
	
	==> kube-apiserver [258a9ea5cc3c60a85f87637461c7b7171bd7c24797dfff5c6bfa96a8ef5cd902] <==
	I1017 20:06:17.090461       1 policy_source.go:240] refreshing policies
	I1017 20:06:17.091655       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:06:17.214716       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:06:17.235261       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1017 20:06:17.262780       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:06:17.262860       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:06:17.310832       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:06:17.687247       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 20:06:17.695124       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 20:06:17.698134       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:06:18.676677       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:06:18.738910       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:06:18.804768       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 20:06:18.813146       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1017 20:06:18.814387       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:06:18.819270       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:06:18.884484       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:06:19.732288       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:06:19.756126       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 20:06:19.769440       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 20:06:24.663541       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:06:24.687790       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:06:24.856230       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1017 20:06:25.152622       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1017 20:07:19.182484       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:37712: use of closed network connection
	
	
	==> kube-controller-manager [ef115d4157c1c76d42f1eddeb334f07f0a6687130e4a8cec63a4549ce5c10021] <==
	I1017 20:06:23.953068       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:06:23.953082       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 20:06:23.953090       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 20:06:23.943889       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:06:23.953559       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 20:06:23.974858       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:06:23.974946       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:06:23.974977       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:06:23.978134       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 20:06:23.979959       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:06:23.980269       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 20:06:23.983002       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1017 20:06:23.983136       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 20:06:23.983007       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:06:23.992154       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 20:06:23.992277       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:06:23.992327       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:06:23.992364       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:06:23.992403       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 20:06:23.992890       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 20:06:23.993252       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 20:06:23.995878       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:06:23.995980       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:06:24.093942       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-572724" podCIDRs=["10.244.0.0/24"]
	I1017 20:07:08.933509       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c64d39d0deba4e42b1277c0d535df73e8d4e9eb23e16b1a8622db7a09f077ae5] <==
	I1017 20:06:26.084557       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:06:26.306808       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:06:26.416817       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:06:26.416860       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 20:06:26.416942       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:06:26.450888       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:06:26.450945       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:06:26.455178       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:06:26.455555       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:06:26.455579       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:06:26.456754       1 config.go:200] "Starting service config controller"
	I1017 20:06:26.456777       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:06:26.460574       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:06:26.460631       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:06:26.460688       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:06:26.460715       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:06:26.461182       1 config.go:309] "Starting node config controller"
	I1017 20:06:26.461234       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:06:26.461263       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:06:26.557755       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:06:26.561017       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:06:26.561034       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [24db83545011d4ec1bb3ff98ee2bbe9e6392db85d568e9a0eed9e7f5ebc1537b] <==
	E1017 20:06:17.048177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 20:06:17.068622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 20:06:17.068778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 20:06:17.068873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 20:06:17.068996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:06:17.069095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:06:17.069220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 20:06:17.069646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:06:17.070621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:06:17.071308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 20:06:17.087299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 20:06:17.887200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 20:06:17.923269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 20:06:17.924037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 20:06:17.966047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:06:18.021031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 20:06:18.025016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:06:18.029603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 20:06:18.041848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 20:06:18.049554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:06:18.053421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:06:18.068883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 20:06:18.130339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 20:06:18.196813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1017 20:06:19.905157       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:06:21 embed-certs-572724 kubelet[1279]: I1017 20:06:21.048505    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-572724" podStartSLOduration=2.048484571 podStartE2EDuration="2.048484571s" podCreationTimestamp="2025-10-17 20:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:06:21.01308759 +0000 UTC m=+1.520630415" watchObservedRunningTime="2025-10-17 20:06:21.048484571 +0000 UTC m=+1.556027396"
	Oct 17 20:06:21 embed-certs-572724 kubelet[1279]: I1017 20:06:21.086324    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-572724" podStartSLOduration=2.086293777 podStartE2EDuration="2.086293777s" podCreationTimestamp="2025-10-17 20:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:06:21.051106349 +0000 UTC m=+1.558649182" watchObservedRunningTime="2025-10-17 20:06:21.086293777 +0000 UTC m=+1.593836610"
	Oct 17 20:06:24 embed-certs-572724 kubelet[1279]: I1017 20:06:24.124843    1279 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 17 20:06:24 embed-certs-572724 kubelet[1279]: I1017 20:06:24.125452    1279 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 17 20:06:25 embed-certs-572724 kubelet[1279]: I1017 20:06:25.093705    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1442750-2145-4f2a-a45a-8b8506de6abf-lib-modules\") pod \"kindnet-cg6w6\" (UID: \"b1442750-2145-4f2a-a45a-8b8506de6abf\") " pod="kube-system/kindnet-cg6w6"
	Oct 17 20:06:25 embed-certs-572724 kubelet[1279]: I1017 20:06:25.093757    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2f9g\" (UniqueName: \"kubernetes.io/projected/b1442750-2145-4f2a-a45a-8b8506de6abf-kube-api-access-v2f9g\") pod \"kindnet-cg6w6\" (UID: \"b1442750-2145-4f2a-a45a-8b8506de6abf\") " pod="kube-system/kindnet-cg6w6"
	Oct 17 20:06:25 embed-certs-572724 kubelet[1279]: I1017 20:06:25.093783    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/89e3a128-22d2-42fa-8277-54ea446f0a18-kube-proxy\") pod \"kube-proxy-2jxkk\" (UID: \"89e3a128-22d2-42fa-8277-54ea446f0a18\") " pod="kube-system/kube-proxy-2jxkk"
	Oct 17 20:06:25 embed-certs-572724 kubelet[1279]: I1017 20:06:25.093801    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g4wf\" (UniqueName: \"kubernetes.io/projected/89e3a128-22d2-42fa-8277-54ea446f0a18-kube-api-access-2g4wf\") pod \"kube-proxy-2jxkk\" (UID: \"89e3a128-22d2-42fa-8277-54ea446f0a18\") " pod="kube-system/kube-proxy-2jxkk"
	Oct 17 20:06:25 embed-certs-572724 kubelet[1279]: I1017 20:06:25.093820    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89e3a128-22d2-42fa-8277-54ea446f0a18-xtables-lock\") pod \"kube-proxy-2jxkk\" (UID: \"89e3a128-22d2-42fa-8277-54ea446f0a18\") " pod="kube-system/kube-proxy-2jxkk"
	Oct 17 20:06:25 embed-certs-572724 kubelet[1279]: I1017 20:06:25.093837    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b1442750-2145-4f2a-a45a-8b8506de6abf-cni-cfg\") pod \"kindnet-cg6w6\" (UID: \"b1442750-2145-4f2a-a45a-8b8506de6abf\") " pod="kube-system/kindnet-cg6w6"
	Oct 17 20:06:25 embed-certs-572724 kubelet[1279]: I1017 20:06:25.093852    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1442750-2145-4f2a-a45a-8b8506de6abf-xtables-lock\") pod \"kindnet-cg6w6\" (UID: \"b1442750-2145-4f2a-a45a-8b8506de6abf\") " pod="kube-system/kindnet-cg6w6"
	Oct 17 20:06:25 embed-certs-572724 kubelet[1279]: I1017 20:06:25.093872    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89e3a128-22d2-42fa-8277-54ea446f0a18-lib-modules\") pod \"kube-proxy-2jxkk\" (UID: \"89e3a128-22d2-42fa-8277-54ea446f0a18\") " pod="kube-system/kube-proxy-2jxkk"
	Oct 17 20:06:25 embed-certs-572724 kubelet[1279]: I1017 20:06:25.345099    1279 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 17 20:06:26 embed-certs-572724 kubelet[1279]: I1017 20:06:26.909776    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2jxkk" podStartSLOduration=2.909758602 podStartE2EDuration="2.909758602s" podCreationTimestamp="2025-10-17 20:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:06:26.908363958 +0000 UTC m=+7.415906783" watchObservedRunningTime="2025-10-17 20:06:26.909758602 +0000 UTC m=+7.417301419"
	Oct 17 20:06:26 embed-certs-572724 kubelet[1279]: I1017 20:06:26.909902    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cg6w6" podStartSLOduration=2.909896125 podStartE2EDuration="2.909896125s" podCreationTimestamp="2025-10-17 20:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:06:25.912861289 +0000 UTC m=+6.420404114" watchObservedRunningTime="2025-10-17 20:06:26.909896125 +0000 UTC m=+7.417438942"
	Oct 17 20:07:06 embed-certs-572724 kubelet[1279]: I1017 20:07:06.240294    1279 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 17 20:07:06 embed-certs-572724 kubelet[1279]: I1017 20:07:06.417994    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvkpf\" (UniqueName: \"kubernetes.io/projected/5c2944e0-d296-4a7e-98e8-dcbf69da9bc7-kube-api-access-fvkpf\") pod \"storage-provisioner\" (UID: \"5c2944e0-d296-4a7e-98e8-dcbf69da9bc7\") " pod="kube-system/storage-provisioner"
	Oct 17 20:07:06 embed-certs-572724 kubelet[1279]: I1017 20:07:06.418221    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5c2944e0-d296-4a7e-98e8-dcbf69da9bc7-tmp\") pod \"storage-provisioner\" (UID: \"5c2944e0-d296-4a7e-98e8-dcbf69da9bc7\") " pod="kube-system/storage-provisioner"
	Oct 17 20:07:06 embed-certs-572724 kubelet[1279]: I1017 20:07:06.418325    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17c2ad15-d7b1-4089-8d58-7f9a984c1aa4-config-volume\") pod \"coredns-66bc5c9577-q9n55\" (UID: \"17c2ad15-d7b1-4089-8d58-7f9a984c1aa4\") " pod="kube-system/coredns-66bc5c9577-q9n55"
	Oct 17 20:07:06 embed-certs-572724 kubelet[1279]: I1017 20:07:06.418422    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm8xh\" (UniqueName: \"kubernetes.io/projected/17c2ad15-d7b1-4089-8d58-7f9a984c1aa4-kube-api-access-vm8xh\") pod \"coredns-66bc5c9577-q9n55\" (UID: \"17c2ad15-d7b1-4089-8d58-7f9a984c1aa4\") " pod="kube-system/coredns-66bc5c9577-q9n55"
	Oct 17 20:07:06 embed-certs-572724 kubelet[1279]: W1017 20:07:06.589718    1279 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e/crio-5188124083a4a10db5e80fb3cf522534b4129b584e2d3a76ccf4c84d12816034 WatchSource:0}: Error finding container 5188124083a4a10db5e80fb3cf522534b4129b584e2d3a76ccf4c84d12816034: Status 404 returned error can't find the container with id 5188124083a4a10db5e80fb3cf522534b4129b584e2d3a76ccf4c84d12816034
	Oct 17 20:07:06 embed-certs-572724 kubelet[1279]: I1017 20:07:06.993114    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-q9n55" podStartSLOduration=41.99309839 podStartE2EDuration="41.99309839s" podCreationTimestamp="2025-10-17 20:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:07:06.992624179 +0000 UTC m=+47.500167013" watchObservedRunningTime="2025-10-17 20:07:06.99309839 +0000 UTC m=+47.500641207"
	Oct 17 20:07:08 embed-certs-572724 kubelet[1279]: I1017 20:07:08.999881    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.999856084 podStartE2EDuration="42.999856084s" podCreationTimestamp="2025-10-17 20:06:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:07:07.058112328 +0000 UTC m=+47.565655144" watchObservedRunningTime="2025-10-17 20:07:08.999856084 +0000 UTC m=+49.507398917"
	Oct 17 20:07:09 embed-certs-572724 kubelet[1279]: I1017 20:07:09.139912    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf455\" (UniqueName: \"kubernetes.io/projected/5f8cf53e-8a62-4677-8c9e-ec9aee8c1cbd-kube-api-access-kf455\") pod \"busybox\" (UID: \"5f8cf53e-8a62-4677-8c9e-ec9aee8c1cbd\") " pod="default/busybox"
	Oct 17 20:07:09 embed-certs-572724 kubelet[1279]: W1017 20:07:09.342576    1279 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e/crio-49ff00c27b6d22a005a448dafc7ec3cefd4e953690ed7587911b97c170f34d83 WatchSource:0}: Error finding container 49ff00c27b6d22a005a448dafc7ec3cefd4e953690ed7587911b97c170f34d83: Status 404 returned error can't find the container with id 49ff00c27b6d22a005a448dafc7ec3cefd4e953690ed7587911b97c170f34d83
	
	
	==> storage-provisioner [409ea7dbd3f329bce18c177a5cbd57bd6ad9ab082612d518ba1c69427aa3a9ce] <==
	I1017 20:07:06.659029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:07:06.674029       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:07:06.677217       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 20:07:06.679942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:07:06.685527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:07:06.685807       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:07:06.685963       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-572724_555db273-5656-4707-85d6-ae74a56295c8!
	I1017 20:07:06.686861       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4bb96dd9-2ce5-40c2-b9ba-fad4b582ad41", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-572724_555db273-5656-4707-85d6-ae74a56295c8 became leader
	W1017 20:07:06.699104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:07:06.712341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:07:06.786389       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-572724_555db273-5656-4707-85d6-ae74a56295c8!
	W1017 20:07:08.716312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:07:08.723322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:07:10.726945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:07:10.732064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:07:12.740151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:07:12.757583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:07:14.762440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:07:14.772330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:07:16.776739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:07:16.786196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:07:18.790620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:07:18.795257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:07:20.799778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:07:20.813005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-572724 -n embed-certs-572724
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-572724 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (8.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-413711 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-413711 --alsologtostderr -v=1: exit status 80 (2.470921758s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-413711 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:07:43.730675  469836 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:07:43.730872  469836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:07:43.730879  469836 out.go:374] Setting ErrFile to fd 2...
	I1017 20:07:43.730884  469836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:07:43.731131  469836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:07:43.731402  469836 out.go:368] Setting JSON to false
	I1017 20:07:43.731420  469836 mustload.go:65] Loading cluster: no-preload-413711
	I1017 20:07:43.731794  469836 config.go:182] Loaded profile config "no-preload-413711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:07:43.732242  469836 cli_runner.go:164] Run: docker container inspect no-preload-413711 --format={{.State.Status}}
	I1017 20:07:43.772694  469836 host.go:66] Checking if "no-preload-413711" exists ...
	I1017 20:07:43.773053  469836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:07:43.882555  469836 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-17 20:07:43.872065888 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:07:43.883204  469836 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-413711 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 20:07:43.886892  469836 out.go:179] * Pausing node no-preload-413711 ... 
	I1017 20:07:43.889891  469836 host.go:66] Checking if "no-preload-413711" exists ...
	I1017 20:07:43.890223  469836 ssh_runner.go:195] Run: systemctl --version
	I1017 20:07:43.890291  469836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-413711
	I1017 20:07:43.919533  469836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/no-preload-413711/id_rsa Username:docker}
	I1017 20:07:44.052580  469836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:07:44.070418  469836 pause.go:52] kubelet running: true
	I1017 20:07:44.070483  469836 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:07:44.444105  469836 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:07:44.444183  469836 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:07:44.522907  469836 cri.go:89] found id: "0b04752f912a24f05f3f174f5e038d1bc5c741985152901f520b685c1af6ae22"
	I1017 20:07:44.522976  469836 cri.go:89] found id: "414b28f4d238e57f8f8c4dee16996a2aed70a51a943c5c03f048a67ec51f0bfd"
	I1017 20:07:44.522995  469836 cri.go:89] found id: "4b27d4265c1b55ad8100a7b68272549e8702d789cd0b676fe143e1ba72d3e73f"
	I1017 20:07:44.523015  469836 cri.go:89] found id: "d55286cae111582ea5afb451068692f46116af2dac4163dd91775155dacabc95"
	I1017 20:07:44.523054  469836 cri.go:89] found id: "deaac6f262625b4a8323f78d4de40fa760609f9d1fb3c2272664be7f075fd5a4"
	I1017 20:07:44.523079  469836 cri.go:89] found id: "d3cbad8ffb59387c5fb4641f605385ffcb3d1293c2dbeb606812de21a7dbfcbe"
	I1017 20:07:44.523098  469836 cri.go:89] found id: "c38dce9b2ac325e84a1349d8c32881acb0b877b98f49fe5fd6e22a8ed8a5df1b"
	I1017 20:07:44.523118  469836 cri.go:89] found id: "36109bb4bd5f615a7a96ed9755d97a57c974349fd49cb42b98be4765efc30f76"
	I1017 20:07:44.523150  469836 cri.go:89] found id: "c6750f7e08419a1ec1ff38425fa3b1f58a501ae0bbd19213da48188848f35535"
	I1017 20:07:44.523169  469836 cri.go:89] found id: "329671f140367b4adca0adf47c66ff41df81a98a236ca514bc725e0955b7dd09"
	I1017 20:07:44.523188  469836 cri.go:89] found id: ""
	I1017 20:07:44.523264  469836 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:07:44.546338  469836 retry.go:31] will retry after 214.926365ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:07:44Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:07:44.761751  469836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:07:44.776698  469836 pause.go:52] kubelet running: false
	I1017 20:07:44.776812  469836 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:07:45.037639  469836 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:07:45.037806  469836 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:07:45.218883  469836 cri.go:89] found id: "0b04752f912a24f05f3f174f5e038d1bc5c741985152901f520b685c1af6ae22"
	I1017 20:07:45.218991  469836 cri.go:89] found id: "414b28f4d238e57f8f8c4dee16996a2aed70a51a943c5c03f048a67ec51f0bfd"
	I1017 20:07:45.219024  469836 cri.go:89] found id: "4b27d4265c1b55ad8100a7b68272549e8702d789cd0b676fe143e1ba72d3e73f"
	I1017 20:07:45.219047  469836 cri.go:89] found id: "d55286cae111582ea5afb451068692f46116af2dac4163dd91775155dacabc95"
	I1017 20:07:45.219086  469836 cri.go:89] found id: "deaac6f262625b4a8323f78d4de40fa760609f9d1fb3c2272664be7f075fd5a4"
	I1017 20:07:45.219105  469836 cri.go:89] found id: "d3cbad8ffb59387c5fb4641f605385ffcb3d1293c2dbeb606812de21a7dbfcbe"
	I1017 20:07:45.219127  469836 cri.go:89] found id: "c38dce9b2ac325e84a1349d8c32881acb0b877b98f49fe5fd6e22a8ed8a5df1b"
	I1017 20:07:45.219161  469836 cri.go:89] found id: "36109bb4bd5f615a7a96ed9755d97a57c974349fd49cb42b98be4765efc30f76"
	I1017 20:07:45.219187  469836 cri.go:89] found id: "c6750f7e08419a1ec1ff38425fa3b1f58a501ae0bbd19213da48188848f35535"
	I1017 20:07:45.219222  469836 cri.go:89] found id: "329671f140367b4adca0adf47c66ff41df81a98a236ca514bc725e0955b7dd09"
	I1017 20:07:45.219252  469836 cri.go:89] found id: ""
	I1017 20:07:45.219349  469836 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:07:45.238211  469836 retry.go:31] will retry after 443.946692ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:07:45Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:07:45.682953  469836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:07:45.698293  469836 pause.go:52] kubelet running: false
	I1017 20:07:45.698440  469836 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:07:45.962356  469836 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:07:45.962517  469836 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:07:46.087282  469836 cri.go:89] found id: "0b04752f912a24f05f3f174f5e038d1bc5c741985152901f520b685c1af6ae22"
	I1017 20:07:46.087359  469836 cri.go:89] found id: "414b28f4d238e57f8f8c4dee16996a2aed70a51a943c5c03f048a67ec51f0bfd"
	I1017 20:07:46.087378  469836 cri.go:89] found id: "4b27d4265c1b55ad8100a7b68272549e8702d789cd0b676fe143e1ba72d3e73f"
	I1017 20:07:46.087398  469836 cri.go:89] found id: "d55286cae111582ea5afb451068692f46116af2dac4163dd91775155dacabc95"
	I1017 20:07:46.087442  469836 cri.go:89] found id: "deaac6f262625b4a8323f78d4de40fa760609f9d1fb3c2272664be7f075fd5a4"
	I1017 20:07:46.087464  469836 cri.go:89] found id: "d3cbad8ffb59387c5fb4641f605385ffcb3d1293c2dbeb606812de21a7dbfcbe"
	I1017 20:07:46.087484  469836 cri.go:89] found id: "c38dce9b2ac325e84a1349d8c32881acb0b877b98f49fe5fd6e22a8ed8a5df1b"
	I1017 20:07:46.087516  469836 cri.go:89] found id: "36109bb4bd5f615a7a96ed9755d97a57c974349fd49cb42b98be4765efc30f76"
	I1017 20:07:46.087539  469836 cri.go:89] found id: "c6750f7e08419a1ec1ff38425fa3b1f58a501ae0bbd19213da48188848f35535"
	I1017 20:07:46.087562  469836 cri.go:89] found id: "329671f140367b4adca0adf47c66ff41df81a98a236ca514bc725e0955b7dd09"
	I1017 20:07:46.087580  469836 cri.go:89] found id: ""
	I1017 20:07:46.087683  469836 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:07:46.105991  469836 out.go:203] 
	W1017 20:07:46.108997  469836 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:07:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:07:46.109179  469836 out.go:285] * 
	W1017 20:07:46.116050  469836 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:07:46.120941  469836 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-413711 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-413711
helpers_test.go:243: (dbg) docker inspect no-preload-413711:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892",
	        "Created": "2025-10-17T20:05:21.029855804Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 465446,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:07:04.79365338Z",
	            "FinishedAt": "2025-10-17T20:07:03.97234982Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892/hosts",
	        "LogPath": "/var/lib/docker/containers/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892-json.log",
	        "Name": "/no-preload-413711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-413711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-413711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892",
	                "LowerDir": "/var/lib/docker/overlay2/ed62f8f42dc7e0fa7067620dab65511a6702191cd284d34799df57c74af977a1-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ed62f8f42dc7e0fa7067620dab65511a6702191cd284d34799df57c74af977a1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ed62f8f42dc7e0fa7067620dab65511a6702191cd284d34799df57c74af977a1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ed62f8f42dc7e0fa7067620dab65511a6702191cd284d34799df57c74af977a1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-413711",
	                "Source": "/var/lib/docker/volumes/no-preload-413711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-413711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-413711",
	                "name.minikube.sigs.k8s.io": "no-preload-413711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "550c237edad3d62bff655598e8f8e9300576416b3d63792b97423a656c614e89",
	            "SandboxKey": "/var/run/docker/netns/550c237edad3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-413711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:d3:11:d0:69:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7a5bca7265808c00f6c846a52d60c76f955a6009c9954a0d43b577117c15f43c",
	                    "EndpointID": "65f2128bc44f7cb0d162302cb595be9de0ec24a444c78811780648cfe82d942e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-413711",
	                        "b7258d1208d4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-413711 -n no-preload-413711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-413711 -n no-preload-413711: exit status 2 (536.223539ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-413711 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-413711 logs -n 25: (1.991869996s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-533238 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-533238    │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ delete  │ -p cert-options-533238                                                                                                                                                                                                                        │ cert-options-533238    │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ start   │ -p old-k8s-version-135652 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-135652 │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:03 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-135652 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-135652 │ jenkins │ v1.37.0 │ 17 Oct 25 20:03 UTC │                     │
	│ stop    │ -p old-k8s-version-135652 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-135652 │ jenkins │ v1.37.0 │ 17 Oct 25 20:03 UTC │ 17 Oct 25 20:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-135652 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-135652 │ jenkins │ v1.37.0 │ 17 Oct 25 20:04 UTC │ 17 Oct 25 20:04 UTC │
	│ start   │ -p old-k8s-version-135652 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-135652 │ jenkins │ v1.37.0 │ 17 Oct 25 20:04 UTC │ 17 Oct 25 20:04 UTC │
	│ image   │ old-k8s-version-135652 image list --format=json                                                                                                                                                                                               │ old-k8s-version-135652 │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ pause   │ -p old-k8s-version-135652 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-135652 │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │                     │
	│ delete  │ -p old-k8s-version-135652                                                                                                                                                                                                                     │ old-k8s-version-135652 │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p cert-expiration-164379 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-164379 │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ delete  │ -p old-k8s-version-135652                                                                                                                                                                                                                     │ old-k8s-version-135652 │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p no-preload-413711 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-413711      │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:06 UTC │
	│ delete  │ -p cert-expiration-164379                                                                                                                                                                                                                     │ cert-expiration-164379 │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-572724     │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-413711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-413711      │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │                     │
	│ stop    │ -p no-preload-413711 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-413711      │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable dashboard -p no-preload-413711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-413711      │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p no-preload-413711 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-413711      │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable metrics-server -p embed-certs-572724 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-572724     │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ stop    │ -p embed-certs-572724 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-572724     │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable dashboard -p embed-certs-572724 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-572724     │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-572724     │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ image   │ no-preload-413711 image list --format=json                                                                                                                                                                                                    │ no-preload-413711      │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ pause   │ -p no-preload-413711 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-413711      │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:07:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:07:34.849793  468306 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:07:34.849924  468306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:07:34.849936  468306 out.go:374] Setting ErrFile to fd 2...
	I1017 20:07:34.849941  468306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:07:34.850223  468306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:07:34.850680  468306 out.go:368] Setting JSON to false
	I1017 20:07:34.851672  468306 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10206,"bootTime":1760721449,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 20:07:34.851736  468306 start.go:141] virtualization:  
	I1017 20:07:34.854698  468306 out.go:179] * [embed-certs-572724] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:07:34.858534  468306 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:07:34.858629  468306 notify.go:220] Checking for updates...
	I1017 20:07:34.864462  468306 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:07:34.867407  468306 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:07:34.870264  468306 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 20:07:34.873147  468306 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:07:34.876056  468306 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:07:34.879478  468306 config.go:182] Loaded profile config "embed-certs-572724": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:07:34.880055  468306 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:07:34.902628  468306 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:07:34.902757  468306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:07:34.964183  468306 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:07:34.954481593 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:07:34.964297  468306 docker.go:318] overlay module found
	I1017 20:07:34.967598  468306 out.go:179] * Using the docker driver based on existing profile
	I1017 20:07:34.970407  468306 start.go:305] selected driver: docker
	I1017 20:07:34.970426  468306 start.go:925] validating driver "docker" against &{Name:embed-certs-572724 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-572724 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:07:34.970523  468306 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:07:34.971255  468306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:07:35.032293  468306 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:07:35.022463105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:07:35.032696  468306 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:07:35.032729  468306 cni.go:84] Creating CNI manager for ""
	I1017 20:07:35.032790  468306 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:07:35.032840  468306 start.go:349] cluster config:
	{Name:embed-certs-572724 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-572724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:07:35.037791  468306 out.go:179] * Starting "embed-certs-572724" primary control-plane node in "embed-certs-572724" cluster
	I1017 20:07:35.040713  468306 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:07:35.043686  468306 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:07:35.046607  468306 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:07:35.046765  468306 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:07:35.046799  468306 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:07:35.046811  468306 cache.go:58] Caching tarball of preloaded images
	I1017 20:07:35.046887  468306 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:07:35.046902  468306 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:07:35.047028  468306 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/config.json ...
	I1017 20:07:35.066302  468306 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:07:35.066326  468306 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:07:35.066340  468306 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:07:35.066363  468306 start.go:360] acquireMachinesLock for embed-certs-572724: {Name:mkd392efc9f089fa6f99fda7caa0023fa20afc6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:07:35.066426  468306 start.go:364] duration metric: took 37.628µs to acquireMachinesLock for "embed-certs-572724"
	I1017 20:07:35.066451  468306 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:07:35.066461  468306 fix.go:54] fixHost starting: 
	I1017 20:07:35.066728  468306 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Status}}
	I1017 20:07:35.083842  468306 fix.go:112] recreateIfNeeded on embed-certs-572724: state=Stopped err=<nil>
	W1017 20:07:35.083876  468306 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:07:35.087229  468306 out.go:252] * Restarting existing docker container for "embed-certs-572724" ...
	I1017 20:07:35.087395  468306 cli_runner.go:164] Run: docker start embed-certs-572724
	I1017 20:07:35.362644  468306 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Status}}
	I1017 20:07:35.387680  468306 kic.go:430] container "embed-certs-572724" state is running.
	I1017 20:07:35.388077  468306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-572724
	I1017 20:07:35.413020  468306 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/config.json ...
	I1017 20:07:35.413244  468306 machine.go:93] provisionDockerMachine start ...
	I1017 20:07:35.413312  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:35.431537  468306 main.go:141] libmachine: Using SSH client type: native
	I1017 20:07:35.432020  468306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1017 20:07:35.432035  468306 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:07:35.434112  468306 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 20:07:38.583975  468306 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-572724
	
	I1017 20:07:38.584011  468306 ubuntu.go:182] provisioning hostname "embed-certs-572724"
	I1017 20:07:38.584095  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:38.601157  468306 main.go:141] libmachine: Using SSH client type: native
	I1017 20:07:38.601469  468306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1017 20:07:38.601485  468306 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-572724 && echo "embed-certs-572724" | sudo tee /etc/hostname
	I1017 20:07:38.767296  468306 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-572724
	
	I1017 20:07:38.767389  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:38.789437  468306 main.go:141] libmachine: Using SSH client type: native
	I1017 20:07:38.789745  468306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1017 20:07:38.789763  468306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-572724' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-572724/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-572724' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:07:38.936611  468306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:07:38.936636  468306 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 20:07:38.936674  468306 ubuntu.go:190] setting up certificates
	I1017 20:07:38.936683  468306 provision.go:84] configureAuth start
	I1017 20:07:38.936746  468306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-572724
	I1017 20:07:38.954771  468306 provision.go:143] copyHostCerts
	I1017 20:07:38.954845  468306 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 20:07:38.954860  468306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 20:07:38.954944  468306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 20:07:38.955050  468306 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 20:07:38.955059  468306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 20:07:38.955090  468306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 20:07:38.955160  468306 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 20:07:38.955170  468306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 20:07:38.955197  468306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 20:07:38.955261  468306 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.embed-certs-572724 san=[127.0.0.1 192.168.85.2 embed-certs-572724 localhost minikube]
	I1017 20:07:39.036084  468306 provision.go:177] copyRemoteCerts
	I1017 20:07:39.036147  468306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:07:39.036193  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:39.053325  468306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:07:39.156328  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:07:39.177364  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 20:07:39.196184  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:07:39.216327  468306 provision.go:87] duration metric: took 279.60015ms to configureAuth
	I1017 20:07:39.216354  468306 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:07:39.216584  468306 config.go:182] Loaded profile config "embed-certs-572724": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:07:39.216697  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:39.234368  468306 main.go:141] libmachine: Using SSH client type: native
	I1017 20:07:39.234691  468306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1017 20:07:39.234712  468306 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:07:39.563336  468306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:07:39.563359  468306 machine.go:96] duration metric: took 4.150105597s to provisionDockerMachine
	I1017 20:07:39.563370  468306 start.go:293] postStartSetup for "embed-certs-572724" (driver="docker")
	I1017 20:07:39.563381  468306 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:07:39.563437  468306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:07:39.563483  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:39.586650  468306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:07:39.692733  468306 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:07:39.696074  468306 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:07:39.696103  468306 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:07:39.696115  468306 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 20:07:39.696179  468306 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 20:07:39.696257  468306 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 20:07:39.696374  468306 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:07:39.703888  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:07:39.721381  468306 start.go:296] duration metric: took 157.995475ms for postStartSetup
	I1017 20:07:39.721476  468306 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:07:39.721514  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:39.739405  468306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:07:39.846239  468306 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:07:39.851490  468306 fix.go:56] duration metric: took 4.785022586s for fixHost
	I1017 20:07:39.851517  468306 start.go:83] releasing machines lock for "embed-certs-572724", held for 4.78507719s
	I1017 20:07:39.851628  468306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-572724
	I1017 20:07:39.868975  468306 ssh_runner.go:195] Run: cat /version.json
	I1017 20:07:39.869028  468306 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:07:39.869033  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:39.869092  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:39.889175  468306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:07:39.902745  468306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:07:39.996299  468306 ssh_runner.go:195] Run: systemctl --version
	I1017 20:07:40.093138  468306 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:07:40.140073  468306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:07:40.145181  468306 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:07:40.145284  468306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:07:40.154101  468306 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:07:40.154127  468306 start.go:495] detecting cgroup driver to use...
	I1017 20:07:40.154172  468306 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:07:40.154247  468306 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:07:40.169608  468306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:07:40.183869  468306 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:07:40.183982  468306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:07:40.201197  468306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:07:40.214894  468306 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:07:40.341351  468306 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:07:40.457323  468306 docker.go:234] disabling docker service ...
	I1017 20:07:40.457385  468306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:07:40.472311  468306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:07:40.485665  468306 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:07:40.596436  468306 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:07:40.716557  468306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:07:40.729776  468306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:07:40.743649  468306 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:07:40.743759  468306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:40.754371  468306 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:07:40.754462  468306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:40.764674  468306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:40.773549  468306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:40.782659  468306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:07:40.791361  468306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:40.800389  468306 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:40.808904  468306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:40.817640  468306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:07:40.825469  468306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:07:40.832858  468306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:07:40.950594  468306 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:07:41.083720  468306 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:07:41.083874  468306 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:07:41.088061  468306 start.go:563] Will wait 60s for crictl version
	I1017 20:07:41.088129  468306 ssh_runner.go:195] Run: which crictl
	I1017 20:07:41.091915  468306 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:07:41.117178  468306 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:07:41.117336  468306 ssh_runner.go:195] Run: crio --version
	I1017 20:07:41.152730  468306 ssh_runner.go:195] Run: crio --version
	I1017 20:07:41.189528  468306 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:07:41.192362  468306 cli_runner.go:164] Run: docker network inspect embed-certs-572724 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:07:41.208095  468306 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 20:07:41.212019  468306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:07:41.221837  468306 kubeadm.go:883] updating cluster {Name:embed-certs-572724 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-572724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:07:41.221959  468306 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:07:41.222010  468306 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:07:41.257308  468306 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:07:41.257334  468306 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:07:41.257387  468306 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:07:41.284455  468306 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:07:41.284481  468306 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:07:41.284489  468306 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1017 20:07:41.284613  468306 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-572724 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-572724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:07:41.284697  468306 ssh_runner.go:195] Run: crio config
	I1017 20:07:41.352110  468306 cni.go:84] Creating CNI manager for ""
	I1017 20:07:41.352178  468306 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:07:41.352213  468306 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:07:41.352269  468306 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-572724 NodeName:embed-certs-572724 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:07:41.352428  468306 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-572724"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:07:41.352550  468306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:07:41.359826  468306 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:07:41.359922  468306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:07:41.367044  468306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1017 20:07:41.380084  468306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:07:41.398543  468306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1017 20:07:41.411621  468306 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:07:41.415055  468306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:07:41.425036  468306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:07:41.544269  468306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:07:41.560988  468306 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724 for IP: 192.168.85.2
	I1017 20:07:41.561018  468306 certs.go:195] generating shared ca certs ...
	I1017 20:07:41.561039  468306 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:07:41.561184  468306 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 20:07:41.561235  468306 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 20:07:41.561246  468306 certs.go:257] generating profile certs ...
	I1017 20:07:41.561340  468306 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/client.key
	I1017 20:07:41.561413  468306 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/apiserver.key.5b851251
	I1017 20:07:41.561459  468306 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/proxy-client.key
	I1017 20:07:41.561592  468306 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 20:07:41.561633  468306 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 20:07:41.561644  468306 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:07:41.561675  468306 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:07:41.561711  468306 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:07:41.561736  468306 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 20:07:41.561789  468306 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:07:41.562427  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:07:41.586475  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:07:41.604935  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:07:41.627823  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 20:07:41.650972  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1017 20:07:41.671270  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:07:41.693047  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:07:41.711639  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:07:41.731254  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 20:07:41.763659  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:07:41.793525  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 20:07:41.811457  468306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:07:41.828088  468306 ssh_runner.go:195] Run: openssl version
	I1017 20:07:41.834843  468306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 20:07:41.843485  468306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 20:07:41.847660  468306 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 20:07:41.847734  468306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 20:07:41.892058  468306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:07:41.903266  468306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:07:41.913779  468306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:07:41.919293  468306 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:07:41.919362  468306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:07:41.964158  468306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:07:41.973474  468306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 20:07:41.982365  468306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 20:07:41.986420  468306 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 20:07:41.986514  468306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 20:07:42.037049  468306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 20:07:42.046394  468306 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:07:42.050769  468306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:07:42.099256  468306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:07:42.156028  468306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:07:42.229300  468306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:07:42.306281  468306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:07:42.367886  468306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 20:07:42.434746  468306 kubeadm.go:400] StartCluster: {Name:embed-certs-572724 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-572724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:07:42.434847  468306 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:07:42.434918  468306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:07:42.515664  468306 cri.go:89] found id: "711a3fa869605d5a18b3f9781975225dfdd63bf72d85af3b2ba7101a28d13528"
	I1017 20:07:42.515689  468306 cri.go:89] found id: "e224a6e5eb1ca81a4a48fbcc8536252f742bddc7bc1c3afbd37a26b29ac8c998"
	I1017 20:07:42.515694  468306 cri.go:89] found id: "0c97fc08388e70c856c936895f529c1a760925d708cce00a9944a4dd9c8d36a3"
	I1017 20:07:42.515707  468306 cri.go:89] found id: "2e90f4799ad4c01480d7887c5d52c632cc0dc3dea6d59784485224961e8a45af"
	I1017 20:07:42.515711  468306 cri.go:89] found id: ""
	I1017 20:07:42.515764  468306 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 20:07:42.538976  468306 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:07:42Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:07:42.539057  468306 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:07:42.555063  468306 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:07:42.555086  468306 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:07:42.555154  468306 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:07:42.565492  468306 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:07:42.566201  468306 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-572724" does not appear in /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:07:42.566497  468306 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-257739/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-572724" cluster setting kubeconfig missing "embed-certs-572724" context setting]
	I1017 20:07:42.567043  468306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:07:42.568912  468306 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:07:42.583510  468306 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1017 20:07:42.583585  468306 kubeadm.go:601] duration metric: took 28.491827ms to restartPrimaryControlPlane
	I1017 20:07:42.583611  468306 kubeadm.go:402] duration metric: took 148.876776ms to StartCluster
	I1017 20:07:42.583651  468306 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:07:42.583739  468306 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:07:42.585118  468306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:07:42.585690  468306 config.go:182] Loaded profile config "embed-certs-572724": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:07:42.585852  468306 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:07:42.585939  468306 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-572724"
	I1017 20:07:42.585955  468306 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-572724"
	W1017 20:07:42.585961  468306 addons.go:247] addon storage-provisioner should already be in state true
	I1017 20:07:42.585982  468306 host.go:66] Checking if "embed-certs-572724" exists ...
	I1017 20:07:42.586493  468306 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Status}}
	I1017 20:07:42.586674  468306 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:07:42.587077  468306 addons.go:69] Setting dashboard=true in profile "embed-certs-572724"
	I1017 20:07:42.587100  468306 addons.go:238] Setting addon dashboard=true in "embed-certs-572724"
	W1017 20:07:42.587107  468306 addons.go:247] addon dashboard should already be in state true
	I1017 20:07:42.587129  468306 host.go:66] Checking if "embed-certs-572724" exists ...
	I1017 20:07:42.587203  468306 addons.go:69] Setting default-storageclass=true in profile "embed-certs-572724"
	I1017 20:07:42.587226  468306 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-572724"
	I1017 20:07:42.587543  468306 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Status}}
	I1017 20:07:42.587548  468306 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Status}}
	I1017 20:07:42.602770  468306 out.go:179] * Verifying Kubernetes components...
	I1017 20:07:42.606155  468306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:07:42.641130  468306 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:07:42.644449  468306 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:07:42.644469  468306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:07:42.644578  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:42.652630  468306 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 20:07:42.655593  468306 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1017 20:07:42.660604  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 20:07:42.660631  468306 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 20:07:42.660708  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:42.685947  468306 addons.go:238] Setting addon default-storageclass=true in "embed-certs-572724"
	W1017 20:07:42.685973  468306 addons.go:247] addon default-storageclass should already be in state true
	I1017 20:07:42.685996  468306 host.go:66] Checking if "embed-certs-572724" exists ...
	I1017 20:07:42.686424  468306 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Status}}
	I1017 20:07:42.719027  468306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:07:42.726711  468306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:07:42.744973  468306 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:07:42.744998  468306 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:07:42.745065  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:42.776342  468306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:07:42.964385  468306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:07:42.999578  468306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:07:43.026235  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 20:07:43.026260  468306 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 20:07:43.101579  468306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:07:43.116145  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 20:07:43.116208  468306 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 20:07:43.240163  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 20:07:43.240185  468306 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 20:07:43.349237  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 20:07:43.349258  468306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 20:07:43.417780  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 20:07:43.417801  468306 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 20:07:43.447636  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 20:07:43.447658  468306 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 20:07:43.478259  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 20:07:43.478286  468306 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 20:07:43.508624  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 20:07:43.508651  468306 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 20:07:43.537990  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 20:07:43.538017  468306 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 20:07:43.557738  468306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	
	
	==> CRI-O <==
	Oct 17 20:07:31 no-preload-413711 crio[649]: time="2025-10-17T20:07:31.982752874Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=28b1b660-9915-417c-b0b5-84a0889f2267 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:07:31 no-preload-413711 crio[649]: time="2025-10-17T20:07:31.985778875Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9/dashboard-metrics-scraper" id=0d7a9e8d-bf4a-4a53-9148-8d80470f626e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:07:31 no-preload-413711 crio[649]: time="2025-10-17T20:07:31.989912004Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:07:32 no-preload-413711 crio[649]: time="2025-10-17T20:07:32.012931013Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:07:32 no-preload-413711 crio[649]: time="2025-10-17T20:07:32.016300151Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:07:32 no-preload-413711 crio[649]: time="2025-10-17T20:07:32.037610595Z" level=info msg="Created container 64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9/dashboard-metrics-scraper" id=0d7a9e8d-bf4a-4a53-9148-8d80470f626e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:07:32 no-preload-413711 crio[649]: time="2025-10-17T20:07:32.03872766Z" level=info msg="Starting container: 64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42" id=af1f44fe-13be-4d61-af3f-c4b3aa8b717a name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:07:32 no-preload-413711 crio[649]: time="2025-10-17T20:07:32.044663275Z" level=info msg="Started container" PID=1624 containerID=64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9/dashboard-metrics-scraper id=af1f44fe-13be-4d61-af3f-c4b3aa8b717a name=/runtime.v1.RuntimeService/StartContainer sandboxID=45a730cbae2755baadd8a3a1827987a9cf8d4927434b1f82e745d3140e823f34
	Oct 17 20:07:32 no-preload-413711 conmon[1622]: conmon 64af5fceeeafdefcc6c0 <ninfo>: container 1624 exited with status 1
	Oct 17 20:07:32 no-preload-413711 crio[649]: time="2025-10-17T20:07:32.984512639Z" level=info msg="Removing container: e882223772e76094cdb3b872f5f2ab97060adcf67c968d12247fd25c2a1a47c1" id=55567c4e-7792-4d2b-8d1b-37096f72ee03 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:07:32 no-preload-413711 crio[649]: time="2025-10-17T20:07:32.992074752Z" level=info msg="Error loading conmon cgroup of container e882223772e76094cdb3b872f5f2ab97060adcf67c968d12247fd25c2a1a47c1: cgroup deleted" id=55567c4e-7792-4d2b-8d1b-37096f72ee03 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:07:32 no-preload-413711 crio[649]: time="2025-10-17T20:07:32.998512556Z" level=info msg="Removed container e882223772e76094cdb3b872f5f2ab97060adcf67c968d12247fd25c2a1a47c1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9/dashboard-metrics-scraper" id=55567c4e-7792-4d2b-8d1b-37096f72ee03 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:07:42 no-preload-413711 crio[649]: time="2025-10-17T20:07:42.108247639Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=69f48064-53a4-45ac-9be0-fccdbf2294a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:07:42 no-preload-413711 crio[649]: time="2025-10-17T20:07:42.113995657Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cc934f01-95fb-4ac4-964e-0407ce6d1cb9 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:07:42 no-preload-413711 crio[649]: time="2025-10-17T20:07:42.122828242Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9/dashboard-metrics-scraper" id=f59a35c3-182e-4eda-a31a-879a0f860737 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:07:42 no-preload-413711 crio[649]: time="2025-10-17T20:07:42.123141628Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:07:42 no-preload-413711 crio[649]: time="2025-10-17T20:07:42.162168984Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:07:42 no-preload-413711 crio[649]: time="2025-10-17T20:07:42.163025855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:07:42 no-preload-413711 crio[649]: time="2025-10-17T20:07:42.219878795Z" level=info msg="Created container c6750f7e08419a1ec1ff38425fa3b1f58a501ae0bbd19213da48188848f35535: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9/dashboard-metrics-scraper" id=f59a35c3-182e-4eda-a31a-879a0f860737 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:07:42 no-preload-413711 crio[649]: time="2025-10-17T20:07:42.221333606Z" level=info msg="Starting container: c6750f7e08419a1ec1ff38425fa3b1f58a501ae0bbd19213da48188848f35535" id=2242ac62-7434-4f20-ad8a-a2f00a56c3ab name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:07:42 no-preload-413711 crio[649]: time="2025-10-17T20:07:42.2290189Z" level=info msg="Started container" PID=1642 containerID=c6750f7e08419a1ec1ff38425fa3b1f58a501ae0bbd19213da48188848f35535 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9/dashboard-metrics-scraper id=2242ac62-7434-4f20-ad8a-a2f00a56c3ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=45a730cbae2755baadd8a3a1827987a9cf8d4927434b1f82e745d3140e823f34
	Oct 17 20:07:42 no-preload-413711 conmon[1640]: conmon c6750f7e08419a1ec1ff <ninfo>: container 1642 exited with status 1
	Oct 17 20:07:43 no-preload-413711 crio[649]: time="2025-10-17T20:07:43.016606631Z" level=info msg="Removing container: 64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42" id=bb449ff1-ed37-4947-824f-2f0e69b6411f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:07:43 no-preload-413711 crio[649]: time="2025-10-17T20:07:43.028834561Z" level=info msg="Error loading conmon cgroup of container 64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42: cgroup deleted" id=bb449ff1-ed37-4947-824f-2f0e69b6411f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:07:43 no-preload-413711 crio[649]: time="2025-10-17T20:07:43.032081725Z" level=info msg="Removed container 64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9/dashboard-metrics-scraper" id=bb449ff1-ed37-4947-824f-2f0e69b6411f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c6750f7e08419       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago       Exited              dashboard-metrics-scraper   2                   45a730cbae275       dashboard-metrics-scraper-6ffb444bf9-2v5z9   kubernetes-dashboard
	329671f140367       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   20 seconds ago      Running             kubernetes-dashboard        0                   7bce548b88fd7       kubernetes-dashboard-855c9754f9-s7s2d        kubernetes-dashboard
	0b04752f912a2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           27 seconds ago      Running             coredns                     1                   dde3440e59d93       coredns-66bc5c9577-4bslb                     kube-system
	564831d0d0018       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           27 seconds ago      Running             busybox                     1                   69afa65e98ed3       busybox                                      default
	414b28f4d238e       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           27 seconds ago      Running             storage-provisioner         1                   e9b770955adb9       storage-provisioner                          kube-system
	4b27d4265c1b5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           27 seconds ago      Running             kube-proxy                  1                   35790673a9627       kube-proxy-kl48k                             kube-system
	d55286cae1115       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           27 seconds ago      Running             kindnet-cni                 1                   c9c6bd1798f94       kindnet-7jkvq                                kube-system
	deaac6f262625       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           35 seconds ago      Running             kube-controller-manager     1                   19a82b0c8db07       kube-controller-manager-no-preload-413711    kube-system
	d3cbad8ffb593       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           35 seconds ago      Running             kube-apiserver              1                   b6bc3b8d65923       kube-apiserver-no-preload-413711             kube-system
	c38dce9b2ac32       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           35 seconds ago      Running             kube-scheduler              1                   79ff4e4190b6f       kube-scheduler-no-preload-413711             kube-system
	36109bb4bd5f6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           35 seconds ago      Running             etcd                        1                   e6bbb0a03025b       etcd-no-preload-413711                       kube-system
	
	
	==> coredns [0b04752f912a24f05f3f174f5e038d1bc5c741985152901f520b685c1af6ae22] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47414 - 8795 "HINFO IN 8918335285238813650.351980678888439187. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013189742s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               no-preload-413711
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-413711
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=no-preload-413711
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_06_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:06:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-413711
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:07:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:07:38 +0000   Fri, 17 Oct 2025 20:06:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:07:38 +0000   Fri, 17 Oct 2025 20:06:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:07:38 +0000   Fri, 17 Oct 2025 20:06:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:07:38 +0000   Fri, 17 Oct 2025 20:06:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-413711
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                b8affef4-ca65-41f6-ac3b-b82ba141b1e4
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 coredns-66bc5c9577-4bslb                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     87s
	  kube-system                 etcd-no-preload-413711                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         95s
	  kube-system                 kindnet-7jkvq                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      87s
	  kube-system                 kube-apiserver-no-preload-413711              250m (12%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-no-preload-413711     200m (10%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-kl48k                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-no-preload-413711              100m (5%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2v5z9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-s7s2d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 85s                  kube-proxy       
	  Normal   Starting                 27s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  103s (x8 over 103s)  kubelet          Node no-preload-413711 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    103s (x8 over 103s)  kubelet          Node no-preload-413711 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     103s (x8 over 103s)  kubelet          Node no-preload-413711 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    92s                  kubelet          Node no-preload-413711 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 92s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  92s                  kubelet          Node no-preload-413711 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     92s                  kubelet          Node no-preload-413711 status is now: NodeHasSufficientPID
	  Normal   Starting                 92s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           88s                  node-controller  Node no-preload-413711 event: Registered Node no-preload-413711 in Controller
	  Normal   NodeReady                71s                  kubelet          Node no-preload-413711 status is now: NodeReady
	  Normal   Starting                 37s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 37s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  37s (x8 over 37s)    kubelet          Node no-preload-413711 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    37s (x8 over 37s)    kubelet          Node no-preload-413711 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     37s (x8 over 37s)    kubelet          Node no-preload-413711 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           27s                  node-controller  Node no-preload-413711 event: Registered Node no-preload-413711 in Controller
	
	
	==> dmesg <==
	[Oct17 19:43] overlayfs: idmapped layers are currently not supported
	[Oct17 19:45] overlayfs: idmapped layers are currently not supported
	[Oct17 19:46] overlayfs: idmapped layers are currently not supported
	[ +18.070710] overlayfs: idmapped layers are currently not supported
	[Oct17 19:47] overlayfs: idmapped layers are currently not supported
	[ +43.697346] overlayfs: idmapped layers are currently not supported
	[Oct17 19:48] overlayfs: idmapped layers are currently not supported
	[Oct17 19:49] overlayfs: idmapped layers are currently not supported
	[ +26.194162] overlayfs: idmapped layers are currently not supported
	[Oct17 19:50] overlayfs: idmapped layers are currently not supported
	[Oct17 19:52] overlayfs: idmapped layers are currently not supported
	[Oct17 19:54] overlayfs: idmapped layers are currently not supported
	[Oct17 19:55] overlayfs: idmapped layers are currently not supported
	[Oct17 19:56] overlayfs: idmapped layers are currently not supported
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:01] overlayfs: idmapped layers are currently not supported
	[ +29.873287] overlayfs: idmapped layers are currently not supported
	[Oct17 20:02] overlayfs: idmapped layers are currently not supported
	[ +29.827785] overlayfs: idmapped layers are currently not supported
	[Oct17 20:03] overlayfs: idmapped layers are currently not supported
	[Oct17 20:04] overlayfs: idmapped layers are currently not supported
	[Oct17 20:05] overlayfs: idmapped layers are currently not supported
	[Oct17 20:06] overlayfs: idmapped layers are currently not supported
	[Oct17 20:07] overlayfs: idmapped layers are currently not supported
	[ +30.002292] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [36109bb4bd5f615a7a96ed9755d97a57c974349fd49cb42b98be4765efc30f76] <==
	{"level":"warn","ts":"2025-10-17T20:07:16.343146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.365571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.381248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.416701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.439400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.449517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.468754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.506241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.517238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.545953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.555446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.573223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.595332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.608508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.628600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.645407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.663882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.690305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.704670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.719267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.738135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.762724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.798372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.807271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.896728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44654","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:07:48 up  2:50,  0 user,  load average: 9.12, 5.15, 3.45
	Linux no-preload-413711 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d55286cae111582ea5afb451068692f46116af2dac4163dd91775155dacabc95] <==
	I1017 20:07:19.943156       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:07:19.944097       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 20:07:19.944272       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:07:19.944322       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:07:19.944364       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:07:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:07:20.206796       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:07:20.206872       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:07:20.206905       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:07:20.207674       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [d3cbad8ffb59387c5fb4641f605385ffcb3d1293c2dbeb606812de21a7dbfcbe] <==
	I1017 20:07:18.083869       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 20:07:18.083929       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:07:18.105118       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:07:18.140595       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 20:07:18.140956       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 20:07:18.140966       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 20:07:18.141459       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 20:07:18.141915       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:07:18.156221       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:07:18.156502       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 20:07:18.156935       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 20:07:18.160064       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:07:18.169385       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1017 20:07:18.207380       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:07:18.550764       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:07:18.827141       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:07:18.882387       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:07:18.935661       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:07:18.949755       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:07:18.962454       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:07:19.052345       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.134.138"}
	I1017 20:07:19.067909       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.230.36"}
	I1017 20:07:21.634587       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:07:21.689044       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:07:21.878930       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [deaac6f262625b4a8323f78d4de40fa760609f9d1fb3c2272664be7f075fd5a4] <==
	I1017 20:07:21.273079       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 20:07:21.275887       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:07:21.276126       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 20:07:21.276158       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 20:07:21.277999       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 20:07:21.279832       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 20:07:21.279990       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:07:21.287650       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:07:21.295422       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 20:07:21.300649       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 20:07:21.300707       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:07:21.300730       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:07:21.300735       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:07:21.300741       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 20:07:21.313204       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 20:07:21.313296       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 20:07:21.313390       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-413711"
	I1017 20:07:21.313443       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 20:07:21.313986       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:07:21.318836       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 20:07:21.324799       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 20:07:21.325952       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 20:07:21.327091       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 20:07:21.330475       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:07:21.909434       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [4b27d4265c1b55ad8100a7b68272549e8702d789cd0b676fe143e1ba72d3e73f] <==
	I1017 20:07:20.196073       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:07:20.518271       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:07:20.620669       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:07:20.620790       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1017 20:07:20.620915       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:07:20.647443       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:07:20.647561       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:07:20.653381       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:07:20.654494       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:07:20.654955       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:07:20.656280       1 config.go:200] "Starting service config controller"
	I1017 20:07:20.656388       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:07:20.656432       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:07:20.656462       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:07:20.656497       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:07:20.656550       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:07:20.657203       1 config.go:309] "Starting node config controller"
	I1017 20:07:20.661812       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:07:20.661873       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:07:20.757438       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:07:20.757481       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:07:20.757518       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c38dce9b2ac325e84a1349d8c32881acb0b877b98f49fe5fd6e22a8ed8a5df1b] <==
	W1017 20:07:17.822630       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 20:07:17.825427       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 20:07:17.825455       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 20:07:17.825480       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 20:07:17.970813       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:07:17.970846       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:07:17.988821       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:07:17.989280       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:07:17.989483       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:07:17.989317       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 20:07:18.006038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 20:07:18.047915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 20:07:18.048058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 20:07:18.048154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 20:07:18.048221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 20:07:18.048289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:07:18.049287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:07:18.053964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:07:18.054054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:07:18.054128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:07:18.054196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 20:07:18.054247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 20:07:18.054296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 20:07:18.054451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1017 20:07:18.091248       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:07:18 no-preload-413711 kubelet[767]: E1017 20:07:18.992077     767 projected.go:196] Error preparing data for projected volume kube-api-access-cl96m for pod kube-system/kindnet-7jkvq: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:no-preload-413711" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-413711' and this object, failed to sync configmap cache: timed out waiting for the condition]
	Oct 17 20:07:18 no-preload-413711 kubelet[767]: E1017 20:07:18.992106     767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a848c0df-632d-4733-9f76-1ed315cae3be-kube-api-access-cl96m podName:a848c0df-632d-4733-9f76-1ed315cae3be nodeName:}" failed. No retries permitted until 2025-10-17 20:07:19.492099829 +0000 UTC m=+7.849524286 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cl96m" (UniqueName: "kubernetes.io/projected/a848c0df-632d-4733-9f76-1ed315cae3be-kube-api-access-cl96m") pod "kindnet-7jkvq" (UID: "a848c0df-632d-4733-9f76-1ed315cae3be") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:no-preload-413711" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-413711' and this object, failed to sync configmap cache: timed out waiting for the condition]
	Oct 17 20:07:18 no-preload-413711 kubelet[767]: E1017 20:07:18.992123     767 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 17 20:07:18 no-preload-413711 kubelet[767]: E1017 20:07:18.992132     767 projected.go:196] Error preparing data for projected volume kube-api-access-p4vpl for pod default/busybox: [failed to fetch token: serviceaccounts "default" is forbidden: User "system:node:no-preload-413711" cannot create resource "serviceaccounts/token" in API group "" in the namespace "default": no relationship found between node 'no-preload-413711' and this object, failed to sync configmap cache: timed out waiting for the condition]
	Oct 17 20:07:18 no-preload-413711 kubelet[767]: E1017 20:07:18.992156     767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8776954-7870-4b04-a178-bc73c09ccec1-kube-api-access-p4vpl podName:e8776954-7870-4b04-a178-bc73c09ccec1 nodeName:}" failed. No retries permitted until 2025-10-17 20:07:19.492149895 +0000 UTC m=+7.849574352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p4vpl" (UniqueName: "kubernetes.io/projected/e8776954-7870-4b04-a178-bc73c09ccec1-kube-api-access-p4vpl") pod "busybox" (UID: "e8776954-7870-4b04-a178-bc73c09ccec1") : [failed to fetch token: serviceaccounts "default" is forbidden: User "system:node:no-preload-413711" cannot create resource "serviceaccounts/token" in API group "" in the namespace "default": no relationship found between node 'no-preload-413711' and this object, failed to sync configmap cache: timed out waiting for the condition]
	Oct 17 20:07:19 no-preload-413711 kubelet[767]: I1017 20:07:19.525210     767 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 17 20:07:21 no-preload-413711 kubelet[767]: I1017 20:07:21.831989     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c1c9f0ad-711b-4d30-8118-7bf18df1e175-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-2v5z9\" (UID: \"c1c9f0ad-711b-4d30-8118-7bf18df1e175\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9"
	Oct 17 20:07:21 no-preload-413711 kubelet[767]: I1017 20:07:21.832048     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp8x9\" (UniqueName: \"kubernetes.io/projected/c1c9f0ad-711b-4d30-8118-7bf18df1e175-kube-api-access-fp8x9\") pod \"dashboard-metrics-scraper-6ffb444bf9-2v5z9\" (UID: \"c1c9f0ad-711b-4d30-8118-7bf18df1e175\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9"
	Oct 17 20:07:21 no-preload-413711 kubelet[767]: I1017 20:07:21.932567     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q92v2\" (UniqueName: \"kubernetes.io/projected/4c45f2f1-d92a-465c-84fd-c82ef9c49fda-kube-api-access-q92v2\") pod \"kubernetes-dashboard-855c9754f9-s7s2d\" (UID: \"4c45f2f1-d92a-465c-84fd-c82ef9c49fda\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s7s2d"
	Oct 17 20:07:21 no-preload-413711 kubelet[767]: I1017 20:07:21.932672     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4c45f2f1-d92a-465c-84fd-c82ef9c49fda-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-s7s2d\" (UID: \"4c45f2f1-d92a-465c-84fd-c82ef9c49fda\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s7s2d"
	Oct 17 20:07:22 no-preload-413711 kubelet[767]: W1017 20:07:22.173479     767 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892/crio-45a730cbae2755baadd8a3a1827987a9cf8d4927434b1f82e745d3140e823f34 WatchSource:0}: Error finding container 45a730cbae2755baadd8a3a1827987a9cf8d4927434b1f82e745d3140e823f34: Status 404 returned error can't find the container with id 45a730cbae2755baadd8a3a1827987a9cf8d4927434b1f82e745d3140e823f34
	Oct 17 20:07:30 no-preload-413711 kubelet[767]: I1017 20:07:30.144086     767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s7s2d" podStartSLOduration=4.499381478 podStartE2EDuration="9.144065186s" podCreationTimestamp="2025-10-17 20:07:21 +0000 UTC" firstStartedPulling="2025-10-17 20:07:22.191647867 +0000 UTC m=+10.549072324" lastFinishedPulling="2025-10-17 20:07:26.836331493 +0000 UTC m=+15.193756032" observedRunningTime="2025-10-17 20:07:26.984693772 +0000 UTC m=+15.342118221" watchObservedRunningTime="2025-10-17 20:07:30.144065186 +0000 UTC m=+18.501489635"
	Oct 17 20:07:31 no-preload-413711 kubelet[767]: I1017 20:07:31.977210     767 scope.go:117] "RemoveContainer" containerID="e882223772e76094cdb3b872f5f2ab97060adcf67c968d12247fd25c2a1a47c1"
	Oct 17 20:07:32 no-preload-413711 kubelet[767]: I1017 20:07:32.982331     767 scope.go:117] "RemoveContainer" containerID="e882223772e76094cdb3b872f5f2ab97060adcf67c968d12247fd25c2a1a47c1"
	Oct 17 20:07:32 no-preload-413711 kubelet[767]: I1017 20:07:32.982872     767 scope.go:117] "RemoveContainer" containerID="64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42"
	Oct 17 20:07:32 no-preload-413711 kubelet[767]: E1017 20:07:32.983075     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2v5z9_kubernetes-dashboard(c1c9f0ad-711b-4d30-8118-7bf18df1e175)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9" podUID="c1c9f0ad-711b-4d30-8118-7bf18df1e175"
	Oct 17 20:07:33 no-preload-413711 kubelet[767]: I1017 20:07:33.986799     767 scope.go:117] "RemoveContainer" containerID="64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42"
	Oct 17 20:07:33 no-preload-413711 kubelet[767]: E1017 20:07:33.986955     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2v5z9_kubernetes-dashboard(c1c9f0ad-711b-4d30-8118-7bf18df1e175)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9" podUID="c1c9f0ad-711b-4d30-8118-7bf18df1e175"
	Oct 17 20:07:42 no-preload-413711 kubelet[767]: I1017 20:07:42.106753     767 scope.go:117] "RemoveContainer" containerID="64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42"
	Oct 17 20:07:43 no-preload-413711 kubelet[767]: I1017 20:07:43.012430     767 scope.go:117] "RemoveContainer" containerID="64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42"
	Oct 17 20:07:43 no-preload-413711 kubelet[767]: I1017 20:07:43.013226     767 scope.go:117] "RemoveContainer" containerID="c6750f7e08419a1ec1ff38425fa3b1f58a501ae0bbd19213da48188848f35535"
	Oct 17 20:07:43 no-preload-413711 kubelet[767]: E1017 20:07:43.013524     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2v5z9_kubernetes-dashboard(c1c9f0ad-711b-4d30-8118-7bf18df1e175)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9" podUID="c1c9f0ad-711b-4d30-8118-7bf18df1e175"
	Oct 17 20:07:44 no-preload-413711 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:07:44 no-preload-413711 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:07:44 no-preload-413711 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [329671f140367b4adca0adf47c66ff41df81a98a236ca514bc725e0955b7dd09] <==
	2025/10/17 20:07:26 Using namespace: kubernetes-dashboard
	2025/10/17 20:07:26 Using in-cluster config to connect to apiserver
	2025/10/17 20:07:26 Using secret token for csrf signing
	2025/10/17 20:07:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 20:07:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 20:07:26 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 20:07:26 Generating JWE encryption key
	2025/10/17 20:07:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 20:07:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 20:07:27 Initializing JWE encryption key from synchronized object
	2025/10/17 20:07:27 Creating in-cluster Sidecar client
	2025/10/17 20:07:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 20:07:27 Serving insecurely on HTTP port: 9090
	2025/10/17 20:07:26 Starting overwatch
	
	
	==> storage-provisioner [414b28f4d238e57f8f8c4dee16996a2aed70a51a943c5c03f048a67ec51f0bfd] <==
	I1017 20:07:19.997162       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-413711 -n no-preload-413711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-413711 -n no-preload-413711: exit status 2 (581.330563ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-413711 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-413711
helpers_test.go:243: (dbg) docker inspect no-preload-413711:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892",
	        "Created": "2025-10-17T20:05:21.029855804Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 465446,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:07:04.79365338Z",
	            "FinishedAt": "2025-10-17T20:07:03.97234982Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892/hosts",
	        "LogPath": "/var/lib/docker/containers/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892-json.log",
	        "Name": "/no-preload-413711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-413711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-413711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892",
	                "LowerDir": "/var/lib/docker/overlay2/ed62f8f42dc7e0fa7067620dab65511a6702191cd284d34799df57c74af977a1-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ed62f8f42dc7e0fa7067620dab65511a6702191cd284d34799df57c74af977a1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ed62f8f42dc7e0fa7067620dab65511a6702191cd284d34799df57c74af977a1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ed62f8f42dc7e0fa7067620dab65511a6702191cd284d34799df57c74af977a1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-413711",
	                "Source": "/var/lib/docker/volumes/no-preload-413711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-413711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-413711",
	                "name.minikube.sigs.k8s.io": "no-preload-413711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "550c237edad3d62bff655598e8f8e9300576416b3d63792b97423a656c614e89",
	            "SandboxKey": "/var/run/docker/netns/550c237edad3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-413711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:d3:11:d0:69:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7a5bca7265808c00f6c846a52d60c76f955a6009c9954a0d43b577117c15f43c",
	                    "EndpointID": "65f2128bc44f7cb0d162302cb595be9de0ec24a444c78811780648cfe82d942e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-413711",
	                        "b7258d1208d4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-413711 -n no-preload-413711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-413711 -n no-preload-413711: exit status 2 (539.661163ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-413711 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-413711 logs -n 25: (1.914928659s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-533238 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-533238    │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ delete  │ -p cert-options-533238                                                                                                                                                                                                                        │ cert-options-533238    │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:02 UTC │
	│ start   │ -p old-k8s-version-135652 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-135652 │ jenkins │ v1.37.0 │ 17 Oct 25 20:02 UTC │ 17 Oct 25 20:03 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-135652 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-135652 │ jenkins │ v1.37.0 │ 17 Oct 25 20:03 UTC │                     │
	│ stop    │ -p old-k8s-version-135652 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-135652 │ jenkins │ v1.37.0 │ 17 Oct 25 20:03 UTC │ 17 Oct 25 20:04 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-135652 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-135652 │ jenkins │ v1.37.0 │ 17 Oct 25 20:04 UTC │ 17 Oct 25 20:04 UTC │
	│ start   │ -p old-k8s-version-135652 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-135652 │ jenkins │ v1.37.0 │ 17 Oct 25 20:04 UTC │ 17 Oct 25 20:04 UTC │
	│ image   │ old-k8s-version-135652 image list --format=json                                                                                                                                                                                               │ old-k8s-version-135652 │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ pause   │ -p old-k8s-version-135652 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-135652 │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │                     │
	│ delete  │ -p old-k8s-version-135652                                                                                                                                                                                                                     │ old-k8s-version-135652 │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p cert-expiration-164379 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-164379 │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ delete  │ -p old-k8s-version-135652                                                                                                                                                                                                                     │ old-k8s-version-135652 │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p no-preload-413711 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-413711      │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:06 UTC │
	│ delete  │ -p cert-expiration-164379                                                                                                                                                                                                                     │ cert-expiration-164379 │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-572724     │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-413711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-413711      │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │                     │
	│ stop    │ -p no-preload-413711 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-413711      │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable dashboard -p no-preload-413711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-413711      │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p no-preload-413711 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-413711      │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable metrics-server -p embed-certs-572724 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-572724     │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ stop    │ -p embed-certs-572724 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-572724     │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable dashboard -p embed-certs-572724 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-572724     │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-572724     │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ image   │ no-preload-413711 image list --format=json                                                                                                                                                                                                    │ no-preload-413711      │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ pause   │ -p no-preload-413711 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-413711      │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:07:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:07:34.849793  468306 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:07:34.849924  468306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:07:34.849936  468306 out.go:374] Setting ErrFile to fd 2...
	I1017 20:07:34.849941  468306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:07:34.850223  468306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:07:34.850680  468306 out.go:368] Setting JSON to false
	I1017 20:07:34.851672  468306 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10206,"bootTime":1760721449,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 20:07:34.851736  468306 start.go:141] virtualization:  
	I1017 20:07:34.854698  468306 out.go:179] * [embed-certs-572724] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:07:34.858534  468306 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:07:34.858629  468306 notify.go:220] Checking for updates...
	I1017 20:07:34.864462  468306 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:07:34.867407  468306 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:07:34.870264  468306 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 20:07:34.873147  468306 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:07:34.876056  468306 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:07:34.879478  468306 config.go:182] Loaded profile config "embed-certs-572724": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:07:34.880055  468306 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:07:34.902628  468306 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:07:34.902757  468306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:07:34.964183  468306 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:07:34.954481593 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:07:34.964297  468306 docker.go:318] overlay module found
	I1017 20:07:34.967598  468306 out.go:179] * Using the docker driver based on existing profile
	I1017 20:07:34.970407  468306 start.go:305] selected driver: docker
	I1017 20:07:34.970426  468306 start.go:925] validating driver "docker" against &{Name:embed-certs-572724 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-572724 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:07:34.970523  468306 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:07:34.971255  468306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:07:35.032293  468306 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:07:35.022463105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:07:35.032696  468306 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:07:35.032729  468306 cni.go:84] Creating CNI manager for ""
	I1017 20:07:35.032790  468306 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:07:35.032840  468306 start.go:349] cluster config:
	{Name:embed-certs-572724 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-572724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:07:35.037791  468306 out.go:179] * Starting "embed-certs-572724" primary control-plane node in "embed-certs-572724" cluster
	I1017 20:07:35.040713  468306 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:07:35.043686  468306 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:07:35.046607  468306 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:07:35.046765  468306 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:07:35.046799  468306 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:07:35.046811  468306 cache.go:58] Caching tarball of preloaded images
	I1017 20:07:35.046887  468306 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:07:35.046902  468306 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:07:35.047028  468306 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/config.json ...
	I1017 20:07:35.066302  468306 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:07:35.066326  468306 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:07:35.066340  468306 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:07:35.066363  468306 start.go:360] acquireMachinesLock for embed-certs-572724: {Name:mkd392efc9f089fa6f99fda7caa0023fa20afc6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:07:35.066426  468306 start.go:364] duration metric: took 37.628µs to acquireMachinesLock for "embed-certs-572724"
	I1017 20:07:35.066451  468306 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:07:35.066461  468306 fix.go:54] fixHost starting: 
	I1017 20:07:35.066728  468306 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Status}}
	I1017 20:07:35.083842  468306 fix.go:112] recreateIfNeeded on embed-certs-572724: state=Stopped err=<nil>
	W1017 20:07:35.083876  468306 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:07:35.087229  468306 out.go:252] * Restarting existing docker container for "embed-certs-572724" ...
	I1017 20:07:35.087395  468306 cli_runner.go:164] Run: docker start embed-certs-572724
	I1017 20:07:35.362644  468306 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Status}}
	I1017 20:07:35.387680  468306 kic.go:430] container "embed-certs-572724" state is running.
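The restart path above reuses the existing machine: the stopped profile container is started with a plain docker start and its state is polled until it reports running. A minimal standalone sketch of the same check, using the container name from this profile:

	# Restart the stopped minikube node container and confirm it is running.
	docker start embed-certs-572724
	docker container inspect embed-certs-572724 --format '{{.State.Status}}'    # expect: running
	# The node IP on the cluster network, as queried just below in the log:
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' embed-certs-572724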
	I1017 20:07:35.388077  468306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-572724
	I1017 20:07:35.413020  468306 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/config.json ...
	I1017 20:07:35.413244  468306 machine.go:93] provisionDockerMachine start ...
	I1017 20:07:35.413312  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:35.431537  468306 main.go:141] libmachine: Using SSH client type: native
	I1017 20:07:35.432020  468306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1017 20:07:35.432035  468306 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:07:35.434112  468306 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 20:07:38.583975  468306 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-572724
	
	I1017 20:07:38.584011  468306 ubuntu.go:182] provisioning hostname "embed-certs-572724"
	I1017 20:07:38.584095  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:38.601157  468306 main.go:141] libmachine: Using SSH client type: native
	I1017 20:07:38.601469  468306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1017 20:07:38.601485  468306 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-572724 && echo "embed-certs-572724" | sudo tee /etc/hostname
	I1017 20:07:38.767296  468306 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-572724
	
	I1017 20:07:38.767389  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:38.789437  468306 main.go:141] libmachine: Using SSH client type: native
	I1017 20:07:38.789745  468306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1017 20:07:38.789763  468306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-572724' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-572724/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-572724' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:07:38.936611  468306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:07:38.936636  468306 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 20:07:38.936674  468306 ubuntu.go:190] setting up certificates
	I1017 20:07:38.936683  468306 provision.go:84] configureAuth start
	I1017 20:07:38.936746  468306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-572724
	I1017 20:07:38.954771  468306 provision.go:143] copyHostCerts
	I1017 20:07:38.954845  468306 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 20:07:38.954860  468306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 20:07:38.954944  468306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 20:07:38.955050  468306 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 20:07:38.955059  468306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 20:07:38.955090  468306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 20:07:38.955160  468306 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 20:07:38.955170  468306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 20:07:38.955197  468306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 20:07:38.955261  468306 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.embed-certs-572724 san=[127.0.0.1 192.168.85.2 embed-certs-572724 localhost minikube]
	I1017 20:07:39.036084  468306 provision.go:177] copyRemoteCerts
	I1017 20:07:39.036147  468306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:07:39.036193  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:39.053325  468306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:07:39.156328  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:07:39.177364  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 20:07:39.196184  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:07:39.216327  468306 provision.go:87] duration metric: took 279.60015ms to configureAuth
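configureAuth above regenerates the docker-machine style server certificate for the node with the SAN list shown (127.0.0.1, 192.168.85.2, embed-certs-572724, localhost, minikube) and then copies ca.pem, server.pem and server-key.pem into /etc/docker over SSH. A rough openssl sketch of issuing an equivalent SAN certificate from an existing CA (file names here are placeholders, not the paths minikube manages):

	# Illustrative only: create a key and CSR, then sign it with the CA, embedding the SANs logged above.
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.embed-certs-572724" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server.pem \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:embed-certs-572724,DNS:localhost,DNS:minikube")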
	I1017 20:07:39.216354  468306 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:07:39.216584  468306 config.go:182] Loaded profile config "embed-certs-572724": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:07:39.216697  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:39.234368  468306 main.go:141] libmachine: Using SSH client type: native
	I1017 20:07:39.234691  468306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I1017 20:07:39.234712  468306 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:07:39.563336  468306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:07:39.563359  468306 machine.go:96] duration metric: took 4.150105597s to provisionDockerMachine
	I1017 20:07:39.563370  468306 start.go:293] postStartSetup for "embed-certs-572724" (driver="docker")
	I1017 20:07:39.563381  468306 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:07:39.563437  468306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:07:39.563483  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:39.586650  468306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:07:39.692733  468306 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:07:39.696074  468306 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:07:39.696103  468306 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:07:39.696115  468306 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 20:07:39.696179  468306 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 20:07:39.696257  468306 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 20:07:39.696374  468306 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:07:39.703888  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:07:39.721381  468306 start.go:296] duration metric: took 157.995475ms for postStartSetup
	I1017 20:07:39.721476  468306 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:07:39.721514  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:39.739405  468306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:07:39.846239  468306 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:07:39.851490  468306 fix.go:56] duration metric: took 4.785022586s for fixHost
	I1017 20:07:39.851517  468306 start.go:83] releasing machines lock for "embed-certs-572724", held for 4.78507719s
	I1017 20:07:39.851628  468306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-572724
	I1017 20:07:39.868975  468306 ssh_runner.go:195] Run: cat /version.json
	I1017 20:07:39.869028  468306 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:07:39.869033  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:39.869092  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:39.889175  468306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:07:39.902745  468306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:07:39.996299  468306 ssh_runner.go:195] Run: systemctl --version
	I1017 20:07:40.093138  468306 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:07:40.140073  468306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:07:40.145181  468306 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:07:40.145284  468306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:07:40.154101  468306 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:07:40.154127  468306 start.go:495] detecting cgroup driver to use...
	I1017 20:07:40.154172  468306 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:07:40.154247  468306 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:07:40.169608  468306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:07:40.183869  468306 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:07:40.183982  468306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:07:40.201197  468306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:07:40.214894  468306 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:07:40.341351  468306 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:07:40.457323  468306 docker.go:234] disabling docker service ...
	I1017 20:07:40.457385  468306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:07:40.472311  468306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:07:40.485665  468306 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:07:40.596436  468306 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:07:40.716557  468306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:07:40.729776  468306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:07:40.743649  468306 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:07:40.743759  468306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:40.754371  468306 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:07:40.754462  468306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:40.764674  468306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:40.773549  468306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:40.782659  468306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:07:40.791361  468306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:40.800389  468306 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:40.808904  468306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:07:40.817640  468306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:07:40.825469  468306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:07:40.832858  468306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:07:40.950594  468306 ssh_runner.go:195] Run: sudo systemctl restart crio
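The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10.1, set cgroup_manager to cgroupfs, force conmon_cgroup to "pod", allow unprivileged low ports via default_sysctls, enable IPv4 forwarding, and finally reload systemd and restart cri-o. The same main edits, restated as one readable sketch (same file and keys as the logged commands, not a different mechanism):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	# make sure a default_sysctls list exists, then allow pods to bind ports below 1024
	sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio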
	I1017 20:07:41.083720  468306 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:07:41.083874  468306 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:07:41.088061  468306 start.go:563] Will wait 60s for crictl version
	I1017 20:07:41.088129  468306 ssh_runner.go:195] Run: which crictl
	I1017 20:07:41.091915  468306 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:07:41.117178  468306 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:07:41.117336  468306 ssh_runner.go:195] Run: crio --version
	I1017 20:07:41.152730  468306 ssh_runner.go:195] Run: crio --version
	I1017 20:07:41.189528  468306 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:07:41.192362  468306 cli_runner.go:164] Run: docker network inspect embed-certs-572724 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:07:41.208095  468306 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 20:07:41.212019  468306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:07:41.221837  468306 kubeadm.go:883] updating cluster {Name:embed-certs-572724 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-572724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:07:41.221959  468306 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:07:41.222010  468306 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:07:41.257308  468306 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:07:41.257334  468306 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:07:41.257387  468306 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:07:41.284455  468306 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:07:41.284481  468306 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:07:41.284489  468306 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1017 20:07:41.284613  468306 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-572724 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-572724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
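The generated unit text above is a standard systemd drop-in override: the empty ExecStart= line clears whatever command the base kubelet.service defines before the minikube-specific command line is set. A rough sketch of installing such an override by hand, assuming the fragment lands in the 10-kubeadm.conf drop-in that is copied over a few lines below (an illustration of the pattern, not minikube's own code path):

	# Install the kubelet override shown above, then reload systemd so it takes effect.
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-572724 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	EOF
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet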
	I1017 20:07:41.284697  468306 ssh_runner.go:195] Run: crio config
	I1017 20:07:41.352110  468306 cni.go:84] Creating CNI manager for ""
	I1017 20:07:41.352178  468306 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:07:41.352213  468306 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:07:41.352269  468306 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-572724 NodeName:embed-certs-572724 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:07:41.352428  468306 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-572724"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:07:41.352550  468306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:07:41.359826  468306 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:07:41.359922  468306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:07:41.367044  468306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1017 20:07:41.380084  468306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:07:41.398543  468306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
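The multi-document kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) has just been written to /var/tmp/minikube/kubeadm.yaml.new; later in this run it is diffed against the existing kubeadm.yaml and, since nothing changed, no reconfiguration is performed. On a first bootstrap a config like this would typically be consumed by kubeadm directly, along the lines of the following hypothetical invocation (not taken from this log):

	# Hypothetical fresh bootstrap using the generated config file.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml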
	I1017 20:07:41.411621  468306 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:07:41.415055  468306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:07:41.425036  468306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:07:41.544269  468306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:07:41.560988  468306 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724 for IP: 192.168.85.2
	I1017 20:07:41.561018  468306 certs.go:195] generating shared ca certs ...
	I1017 20:07:41.561039  468306 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:07:41.561184  468306 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 20:07:41.561235  468306 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 20:07:41.561246  468306 certs.go:257] generating profile certs ...
	I1017 20:07:41.561340  468306 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/client.key
	I1017 20:07:41.561413  468306 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/apiserver.key.5b851251
	I1017 20:07:41.561459  468306 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/proxy-client.key
	I1017 20:07:41.561592  468306 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 20:07:41.561633  468306 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 20:07:41.561644  468306 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:07:41.561675  468306 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:07:41.561711  468306 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:07:41.561736  468306 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 20:07:41.561789  468306 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:07:41.562427  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:07:41.586475  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:07:41.604935  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:07:41.627823  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 20:07:41.650972  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1017 20:07:41.671270  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:07:41.693047  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:07:41.711639  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/embed-certs-572724/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:07:41.731254  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 20:07:41.763659  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:07:41.793525  468306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 20:07:41.811457  468306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:07:41.828088  468306 ssh_runner.go:195] Run: openssl version
	I1017 20:07:41.834843  468306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 20:07:41.843485  468306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 20:07:41.847660  468306 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 20:07:41.847734  468306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 20:07:41.892058  468306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:07:41.903266  468306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:07:41.913779  468306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:07:41.919293  468306 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:07:41.919362  468306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:07:41.964158  468306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:07:41.973474  468306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 20:07:41.982365  468306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 20:07:41.986420  468306 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 20:07:41.986514  468306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 20:07:42.037049  468306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
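Each CA bundle copied into /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject-hash name, which is what the openssl x509 -hash calls above compute (b5213941 for minikubeCA.pem, for example). The same pattern for a single certificate, as a standalone sketch:

	# Link a CA certificate into OpenSSL's hashed certificate directory.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"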
	I1017 20:07:42.046394  468306 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:07:42.050769  468306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:07:42.099256  468306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:07:42.156028  468306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:07:42.229300  468306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:07:42.306281  468306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:07:42.367886  468306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
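The six openssl runs above confirm that each control-plane certificate is still valid for at least another 24 hours; -checkend 86400 makes openssl exit non-zero if the certificate expires within that many seconds, which presumably gates certificate regeneration on restart. The same check against one file from this run:

	# Exit status 0: valid for at least one more day; non-zero: expiring or expired.
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "certificate ok" || echo "certificate expires within 24h"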
	I1017 20:07:42.434746  468306 kubeadm.go:400] StartCluster: {Name:embed-certs-572724 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-572724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:07:42.434847  468306 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:07:42.434918  468306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:07:42.515664  468306 cri.go:89] found id: "711a3fa869605d5a18b3f9781975225dfdd63bf72d85af3b2ba7101a28d13528"
	I1017 20:07:42.515689  468306 cri.go:89] found id: "e224a6e5eb1ca81a4a48fbcc8536252f742bddc7bc1c3afbd37a26b29ac8c998"
	I1017 20:07:42.515694  468306 cri.go:89] found id: "0c97fc08388e70c856c936895f529c1a760925d708cce00a9944a4dd9c8d36a3"
	I1017 20:07:42.515707  468306 cri.go:89] found id: "2e90f4799ad4c01480d7887c5d52c632cc0dc3dea6d59784485224961e8a45af"
	I1017 20:07:42.515711  468306 cri.go:89] found id: ""
	I1017 20:07:42.515764  468306 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 20:07:42.538976  468306 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:07:42Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:07:42.539057  468306 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:07:42.555063  468306 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:07:42.555086  468306 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:07:42.555154  468306 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:07:42.565492  468306 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:07:42.566201  468306 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-572724" does not appear in /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:07:42.566497  468306 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-257739/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-572724" cluster setting kubeconfig missing "embed-certs-572724" context setting]
	I1017 20:07:42.567043  468306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:07:42.568912  468306 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:07:42.583510  468306 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1017 20:07:42.583585  468306 kubeadm.go:601] duration metric: took 28.491827ms to restartPrimaryControlPlane
	I1017 20:07:42.583611  468306 kubeadm.go:402] duration metric: took 148.876776ms to StartCluster
	I1017 20:07:42.583651  468306 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:07:42.583739  468306 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:07:42.585118  468306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:07:42.585690  468306 config.go:182] Loaded profile config "embed-certs-572724": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:07:42.585852  468306 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:07:42.585939  468306 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-572724"
	I1017 20:07:42.585955  468306 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-572724"
	W1017 20:07:42.585961  468306 addons.go:247] addon storage-provisioner should already be in state true
	I1017 20:07:42.585982  468306 host.go:66] Checking if "embed-certs-572724" exists ...
	I1017 20:07:42.586493  468306 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Status}}
	I1017 20:07:42.586674  468306 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:07:42.587077  468306 addons.go:69] Setting dashboard=true in profile "embed-certs-572724"
	I1017 20:07:42.587100  468306 addons.go:238] Setting addon dashboard=true in "embed-certs-572724"
	W1017 20:07:42.587107  468306 addons.go:247] addon dashboard should already be in state true
	I1017 20:07:42.587129  468306 host.go:66] Checking if "embed-certs-572724" exists ...
	I1017 20:07:42.587203  468306 addons.go:69] Setting default-storageclass=true in profile "embed-certs-572724"
	I1017 20:07:42.587226  468306 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-572724"
	I1017 20:07:42.587543  468306 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Status}}
	I1017 20:07:42.587548  468306 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Status}}
	I1017 20:07:42.602770  468306 out.go:179] * Verifying Kubernetes components...
	I1017 20:07:42.606155  468306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:07:42.641130  468306 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:07:42.644449  468306 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:07:42.644469  468306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:07:42.644578  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:42.652630  468306 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 20:07:42.655593  468306 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1017 20:07:42.660604  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 20:07:42.660631  468306 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 20:07:42.660708  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:42.685947  468306 addons.go:238] Setting addon default-storageclass=true in "embed-certs-572724"
	W1017 20:07:42.685973  468306 addons.go:247] addon default-storageclass should already be in state true
	I1017 20:07:42.685996  468306 host.go:66] Checking if "embed-certs-572724" exists ...
	I1017 20:07:42.686424  468306 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Status}}
	I1017 20:07:42.719027  468306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:07:42.726711  468306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:07:42.744973  468306 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:07:42.744998  468306 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:07:42.745065  468306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:07:42.776342  468306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:07:42.964385  468306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:07:42.999578  468306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:07:43.026235  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 20:07:43.026260  468306 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 20:07:43.101579  468306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:07:43.116145  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 20:07:43.116208  468306 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 20:07:43.240163  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 20:07:43.240185  468306 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 20:07:43.349237  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 20:07:43.349258  468306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 20:07:43.417780  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 20:07:43.417801  468306 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 20:07:43.447636  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 20:07:43.447658  468306 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 20:07:43.478259  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 20:07:43.478286  468306 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 20:07:43.508624  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 20:07:43.508651  468306 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 20:07:43.537990  468306 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 20:07:43.538017  468306 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 20:07:43.557738  468306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
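
The startup log above checks each control-plane certificate with openssl x509 -noout -checkend 86400 before deciding whether the existing certs can be reused. A minimal sketch of the same check, assuming it is run inside the node (e.g. via minikube ssh -p embed-certs-572724) where /var/lib/minikube/certs exists:

    # Report which client certs would expire within the next 24h (86400s);
    # openssl exits non-zero once a certificate would be expired by then.
    for crt in apiserver-kubelet-client.crt apiserver-etcd-client.crt front-proxy-client.crt; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/$crt" -checkend 86400 \
        && echo "$crt: still valid in 24h" \
        || echo "$crt: expires within 24h"
    done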
	
	
	==> CRI-O <==
	Oct 17 20:07:31 no-preload-413711 crio[649]: time="2025-10-17T20:07:31.985778875Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9/dashboard-metrics-scraper" id=0d7a9e8d-bf4a-4a53-9148-8d80470f626e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:07:31 no-preload-413711 crio[649]: time="2025-10-17T20:07:31.989912004Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:07:32 no-preload-413711 crio[649]: time="2025-10-17T20:07:32.012931013Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:07:32 no-preload-413711 crio[649]: time="2025-10-17T20:07:32.016300151Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:07:32 no-preload-413711 crio[649]: time="2025-10-17T20:07:32.037610595Z" level=info msg="Created container 64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9/dashboard-metrics-scraper" id=0d7a9e8d-bf4a-4a53-9148-8d80470f626e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:07:32 no-preload-413711 crio[649]: time="2025-10-17T20:07:32.03872766Z" level=info msg="Starting container: 64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42" id=af1f44fe-13be-4d61-af3f-c4b3aa8b717a name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:07:32 no-preload-413711 crio[649]: time="2025-10-17T20:07:32.044663275Z" level=info msg="Started container" PID=1624 containerID=64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9/dashboard-metrics-scraper id=af1f44fe-13be-4d61-af3f-c4b3aa8b717a name=/runtime.v1.RuntimeService/StartContainer sandboxID=45a730cbae2755baadd8a3a1827987a9cf8d4927434b1f82e745d3140e823f34
	Oct 17 20:07:32 no-preload-413711 conmon[1622]: conmon 64af5fceeeafdefcc6c0 <ninfo>: container 1624 exited with status 1
	Oct 17 20:07:32 no-preload-413711 crio[649]: time="2025-10-17T20:07:32.984512639Z" level=info msg="Removing container: e882223772e76094cdb3b872f5f2ab97060adcf67c968d12247fd25c2a1a47c1" id=55567c4e-7792-4d2b-8d1b-37096f72ee03 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:07:32 no-preload-413711 crio[649]: time="2025-10-17T20:07:32.992074752Z" level=info msg="Error loading conmon cgroup of container e882223772e76094cdb3b872f5f2ab97060adcf67c968d12247fd25c2a1a47c1: cgroup deleted" id=55567c4e-7792-4d2b-8d1b-37096f72ee03 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:07:32 no-preload-413711 crio[649]: time="2025-10-17T20:07:32.998512556Z" level=info msg="Removed container e882223772e76094cdb3b872f5f2ab97060adcf67c968d12247fd25c2a1a47c1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9/dashboard-metrics-scraper" id=55567c4e-7792-4d2b-8d1b-37096f72ee03 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:07:42 no-preload-413711 crio[649]: time="2025-10-17T20:07:42.108247639Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=69f48064-53a4-45ac-9be0-fccdbf2294a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:07:42 no-preload-413711 crio[649]: time="2025-10-17T20:07:42.113995657Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cc934f01-95fb-4ac4-964e-0407ce6d1cb9 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:07:42 no-preload-413711 crio[649]: time="2025-10-17T20:07:42.122828242Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9/dashboard-metrics-scraper" id=f59a35c3-182e-4eda-a31a-879a0f860737 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:07:42 no-preload-413711 crio[649]: time="2025-10-17T20:07:42.123141628Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:07:42 no-preload-413711 crio[649]: time="2025-10-17T20:07:42.162168984Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:07:42 no-preload-413711 crio[649]: time="2025-10-17T20:07:42.163025855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:07:42 no-preload-413711 crio[649]: time="2025-10-17T20:07:42.219878795Z" level=info msg="Created container c6750f7e08419a1ec1ff38425fa3b1f58a501ae0bbd19213da48188848f35535: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9/dashboard-metrics-scraper" id=f59a35c3-182e-4eda-a31a-879a0f860737 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:07:42 no-preload-413711 crio[649]: time="2025-10-17T20:07:42.221333606Z" level=info msg="Starting container: c6750f7e08419a1ec1ff38425fa3b1f58a501ae0bbd19213da48188848f35535" id=2242ac62-7434-4f20-ad8a-a2f00a56c3ab name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:07:42 no-preload-413711 crio[649]: time="2025-10-17T20:07:42.2290189Z" level=info msg="Started container" PID=1642 containerID=c6750f7e08419a1ec1ff38425fa3b1f58a501ae0bbd19213da48188848f35535 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9/dashboard-metrics-scraper id=2242ac62-7434-4f20-ad8a-a2f00a56c3ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=45a730cbae2755baadd8a3a1827987a9cf8d4927434b1f82e745d3140e823f34
	Oct 17 20:07:42 no-preload-413711 conmon[1640]: conmon c6750f7e08419a1ec1ff <ninfo>: container 1642 exited with status 1
	Oct 17 20:07:43 no-preload-413711 crio[649]: time="2025-10-17T20:07:43.016606631Z" level=info msg="Removing container: 64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42" id=bb449ff1-ed37-4947-824f-2f0e69b6411f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:07:43 no-preload-413711 crio[649]: time="2025-10-17T20:07:43.028834561Z" level=info msg="Error loading conmon cgroup of container 64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42: cgroup deleted" id=bb449ff1-ed37-4947-824f-2f0e69b6411f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:07:43 no-preload-413711 crio[649]: time="2025-10-17T20:07:43.032081725Z" level=info msg="Removed container 64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9/dashboard-metrics-scraper" id=bb449ff1-ed37-4947-824f-2f0e69b6411f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:07:50 no-preload-413711 conmon[1137]: conmon 414b28f4d238e57f8f8c <ninfo>: container 1140 exited with status 1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c6750f7e08419       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago       Exited              dashboard-metrics-scraper   2                   45a730cbae275       dashboard-metrics-scraper-6ffb444bf9-2v5z9   kubernetes-dashboard
	329671f140367       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   24 seconds ago      Running             kubernetes-dashboard        0                   7bce548b88fd7       kubernetes-dashboard-855c9754f9-s7s2d        kubernetes-dashboard
	0b04752f912a2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           31 seconds ago      Running             coredns                     1                   dde3440e59d93       coredns-66bc5c9577-4bslb                     kube-system
	564831d0d0018       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           31 seconds ago      Running             busybox                     1                   69afa65e98ed3       busybox                                      default
	414b28f4d238e       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           31 seconds ago      Exited              storage-provisioner         1                   e9b770955adb9       storage-provisioner                          kube-system
	4b27d4265c1b5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           31 seconds ago      Running             kube-proxy                  1                   35790673a9627       kube-proxy-kl48k                             kube-system
	d55286cae1115       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           31 seconds ago      Running             kindnet-cni                 1                   c9c6bd1798f94       kindnet-7jkvq                                kube-system
	deaac6f262625       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           38 seconds ago      Running             kube-controller-manager     1                   19a82b0c8db07       kube-controller-manager-no-preload-413711    kube-system
	d3cbad8ffb593       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           38 seconds ago      Running             kube-apiserver              1                   b6bc3b8d65923       kube-apiserver-no-preload-413711             kube-system
	c38dce9b2ac32       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           38 seconds ago      Running             kube-scheduler              1                   79ff4e4190b6f       kube-scheduler-no-preload-413711             kube-system
	36109bb4bd5f6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           38 seconds ago      Running             etcd                        1                   e6bbb0a03025b       etcd-no-preload-413711                       kube-system
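
The CRI-O log and the container table above show dashboard-metrics-scraper-6ffb444bf9-2v5z9 being created, exiting with status 1, and removed in a loop. A hedged sketch of how that container could be inspected from inside the node (minikube ssh -p no-preload-413711); the truncated ID c6750f7e08419 is taken from the table and will differ between runs:

    # List all containers, including exited ones, in the kubernetes-dashboard namespace,
    # mirroring the crictl label filter minikube uses for kube-system earlier in the log.
    sudo crictl ps -a --label io.kubernetes.pod.namespace=kubernetes-dashboard
    # Tail the exited scraper container's output to see why it exits with status 1.
    sudo crictl logs --tail 50 c6750f7e08419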
	
	
	==> coredns [0b04752f912a24f05f3f174f5e038d1bc5c741985152901f520b685c1af6ae22] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47414 - 8795 "HINFO IN 8918335285238813650.351980678888439187. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013189742s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-413711
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-413711
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=no-preload-413711
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_06_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:06:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-413711
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:07:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:07:38 +0000   Fri, 17 Oct 2025 20:06:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:07:38 +0000   Fri, 17 Oct 2025 20:06:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:07:38 +0000   Fri, 17 Oct 2025 20:06:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:07:38 +0000   Fri, 17 Oct 2025 20:06:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-413711
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                b8affef4-ca65-41f6-ac3b-b82ba141b1e4
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 coredns-66bc5c9577-4bslb                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     90s
	  kube-system                 etcd-no-preload-413711                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         98s
	  kube-system                 kindnet-7jkvq                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      90s
	  kube-system                 kube-apiserver-no-preload-413711              250m (12%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-no-preload-413711     200m (10%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-proxy-kl48k                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-no-preload-413711              100m (5%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2v5z9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-s7s2d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 88s                  kube-proxy       
	  Normal   Starting                 30s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  106s (x8 over 106s)  kubelet          Node no-preload-413711 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    106s (x8 over 106s)  kubelet          Node no-preload-413711 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     106s (x8 over 106s)  kubelet          Node no-preload-413711 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    95s                  kubelet          Node no-preload-413711 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 95s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  95s                  kubelet          Node no-preload-413711 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     95s                  kubelet          Node no-preload-413711 status is now: NodeHasSufficientPID
	  Normal   Starting                 95s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           91s                  node-controller  Node no-preload-413711 event: Registered Node no-preload-413711 in Controller
	  Normal   NodeReady                74s                  kubelet          Node no-preload-413711 status is now: NodeReady
	  Normal   Starting                 40s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 40s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  40s (x8 over 40s)    kubelet          Node no-preload-413711 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    40s (x8 over 40s)    kubelet          Node no-preload-413711 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     40s (x8 over 40s)    kubelet          Node no-preload-413711 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s                  node-controller  Node no-preload-413711 event: Registered Node no-preload-413711 in Controller
	
	
	==> dmesg <==
	[Oct17 19:43] overlayfs: idmapped layers are currently not supported
	[Oct17 19:45] overlayfs: idmapped layers are currently not supported
	[Oct17 19:46] overlayfs: idmapped layers are currently not supported
	[ +18.070710] overlayfs: idmapped layers are currently not supported
	[Oct17 19:47] overlayfs: idmapped layers are currently not supported
	[ +43.697346] overlayfs: idmapped layers are currently not supported
	[Oct17 19:48] overlayfs: idmapped layers are currently not supported
	[Oct17 19:49] overlayfs: idmapped layers are currently not supported
	[ +26.194162] overlayfs: idmapped layers are currently not supported
	[Oct17 19:50] overlayfs: idmapped layers are currently not supported
	[Oct17 19:52] overlayfs: idmapped layers are currently not supported
	[Oct17 19:54] overlayfs: idmapped layers are currently not supported
	[Oct17 19:55] overlayfs: idmapped layers are currently not supported
	[Oct17 19:56] overlayfs: idmapped layers are currently not supported
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:01] overlayfs: idmapped layers are currently not supported
	[ +29.873287] overlayfs: idmapped layers are currently not supported
	[Oct17 20:02] overlayfs: idmapped layers are currently not supported
	[ +29.827785] overlayfs: idmapped layers are currently not supported
	[Oct17 20:03] overlayfs: idmapped layers are currently not supported
	[Oct17 20:04] overlayfs: idmapped layers are currently not supported
	[Oct17 20:05] overlayfs: idmapped layers are currently not supported
	[Oct17 20:06] overlayfs: idmapped layers are currently not supported
	[Oct17 20:07] overlayfs: idmapped layers are currently not supported
	[ +30.002292] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [36109bb4bd5f615a7a96ed9755d97a57c974349fd49cb42b98be4765efc30f76] <==
	{"level":"warn","ts":"2025-10-17T20:07:16.343146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.365571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.381248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.416701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.439400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.449517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.468754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.506241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.517238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.545953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.555446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.573223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.595332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.608508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.628600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.645407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.663882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.690305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.704670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.719267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.738135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.762724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.798372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.807271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:16.896728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44654","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:07:51 up  2:50,  0 user,  load average: 9.12, 5.15, 3.45
	Linux no-preload-413711 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d55286cae111582ea5afb451068692f46116af2dac4163dd91775155dacabc95] <==
	I1017 20:07:19.943156       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:07:19.944097       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 20:07:19.944272       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:07:19.944322       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:07:19.944364       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:07:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:07:20.206796       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:07:20.206872       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:07:20.206905       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:07:20.207674       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 20:07:50.207191       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 20:07:50.207340       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 20:07:50.207430       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 20:07:50.208698       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
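
Both coredns and kindnet above time out against 10.96.0.1:443, the in-cluster Service VIP for the apiserver, even though the apiserver itself is running. A minimal reachability sketch, assuming it is run inside the node (the VIP is the first address of the ServiceCIDR 10.96.0.0/12 and 192.168.76.2:8443 is the node's apiserver endpoint from the sections above; an HTTP authorization error in the response would still prove the endpoint is reachable):

    # Probe the Service VIP that coredns/kindnet cannot reach.
    curl -sk --max-time 5 https://10.96.0.1:443/healthz; echo
    # Compare with the apiserver reached directly on the node IP.
    curl -sk --max-time 5 https://192.168.76.2:8443/healthz; echo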
	
	
	==> kube-apiserver [d3cbad8ffb59387c5fb4641f605385ffcb3d1293c2dbeb606812de21a7dbfcbe] <==
	I1017 20:07:18.083869       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 20:07:18.083929       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:07:18.105118       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:07:18.140595       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 20:07:18.140956       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 20:07:18.140966       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 20:07:18.141459       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 20:07:18.141915       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:07:18.156221       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:07:18.156502       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 20:07:18.156935       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 20:07:18.160064       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:07:18.169385       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1017 20:07:18.207380       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:07:18.550764       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:07:18.827141       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:07:18.882387       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:07:18.935661       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:07:18.949755       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:07:18.962454       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:07:19.052345       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.134.138"}
	I1017 20:07:19.067909       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.230.36"}
	I1017 20:07:21.634587       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:07:21.689044       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:07:21.878930       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [deaac6f262625b4a8323f78d4de40fa760609f9d1fb3c2272664be7f075fd5a4] <==
	I1017 20:07:21.273079       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 20:07:21.275887       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:07:21.276126       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 20:07:21.276158       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 20:07:21.277999       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 20:07:21.279832       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 20:07:21.279990       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:07:21.287650       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:07:21.295422       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 20:07:21.300649       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 20:07:21.300707       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:07:21.300730       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:07:21.300735       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:07:21.300741       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 20:07:21.313204       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 20:07:21.313296       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 20:07:21.313390       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-413711"
	I1017 20:07:21.313443       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 20:07:21.313986       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:07:21.318836       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 20:07:21.324799       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 20:07:21.325952       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 20:07:21.327091       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 20:07:21.330475       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:07:21.909434       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [4b27d4265c1b55ad8100a7b68272549e8702d789cd0b676fe143e1ba72d3e73f] <==
	I1017 20:07:20.196073       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:07:20.518271       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:07:20.620669       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:07:20.620790       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1017 20:07:20.620915       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:07:20.647443       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:07:20.647561       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:07:20.653381       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:07:20.654494       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:07:20.654955       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:07:20.656280       1 config.go:200] "Starting service config controller"
	I1017 20:07:20.656388       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:07:20.656432       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:07:20.656462       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:07:20.656497       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:07:20.656550       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:07:20.657203       1 config.go:309] "Starting node config controller"
	I1017 20:07:20.661812       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:07:20.661873       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:07:20.757438       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:07:20.757481       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:07:20.757518       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c38dce9b2ac325e84a1349d8c32881acb0b877b98f49fe5fd6e22a8ed8a5df1b] <==
	W1017 20:07:17.822630       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 20:07:17.825427       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 20:07:17.825455       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 20:07:17.825480       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 20:07:17.970813       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:07:17.970846       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:07:17.988821       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:07:17.989280       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:07:17.989483       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:07:17.989317       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 20:07:18.006038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 20:07:18.047915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 20:07:18.048058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 20:07:18.048154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 20:07:18.048221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 20:07:18.048289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:07:18.049287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:07:18.053964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:07:18.054054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:07:18.054128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:07:18.054196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 20:07:18.054247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 20:07:18.054296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 20:07:18.054451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1017 20:07:18.091248       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:07:18 no-preload-413711 kubelet[767]: E1017 20:07:18.992077     767 projected.go:196] Error preparing data for projected volume kube-api-access-cl96m for pod kube-system/kindnet-7jkvq: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:no-preload-413711" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-413711' and this object, failed to sync configmap cache: timed out waiting for the condition]
	Oct 17 20:07:18 no-preload-413711 kubelet[767]: E1017 20:07:18.992106     767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a848c0df-632d-4733-9f76-1ed315cae3be-kube-api-access-cl96m podName:a848c0df-632d-4733-9f76-1ed315cae3be nodeName:}" failed. No retries permitted until 2025-10-17 20:07:19.492099829 +0000 UTC m=+7.849524286 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cl96m" (UniqueName: "kubernetes.io/projected/a848c0df-632d-4733-9f76-1ed315cae3be-kube-api-access-cl96m") pod "kindnet-7jkvq" (UID: "a848c0df-632d-4733-9f76-1ed315cae3be") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:no-preload-413711" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-413711' and this object, failed to sync configmap cache: timed out waiting for the condition]
	Oct 17 20:07:18 no-preload-413711 kubelet[767]: E1017 20:07:18.992123     767 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 17 20:07:18 no-preload-413711 kubelet[767]: E1017 20:07:18.992132     767 projected.go:196] Error preparing data for projected volume kube-api-access-p4vpl for pod default/busybox: [failed to fetch token: serviceaccounts "default" is forbidden: User "system:node:no-preload-413711" cannot create resource "serviceaccounts/token" in API group "" in the namespace "default": no relationship found between node 'no-preload-413711' and this object, failed to sync configmap cache: timed out waiting for the condition]
	Oct 17 20:07:18 no-preload-413711 kubelet[767]: E1017 20:07:18.992156     767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8776954-7870-4b04-a178-bc73c09ccec1-kube-api-access-p4vpl podName:e8776954-7870-4b04-a178-bc73c09ccec1 nodeName:}" failed. No retries permitted until 2025-10-17 20:07:19.492149895 +0000 UTC m=+7.849574352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p4vpl" (UniqueName: "kubernetes.io/projected/e8776954-7870-4b04-a178-bc73c09ccec1-kube-api-access-p4vpl") pod "busybox" (UID: "e8776954-7870-4b04-a178-bc73c09ccec1") : [failed to fetch token: serviceaccounts "default" is forbidden: User "system:node:no-preload-413711" cannot create resource "serviceaccounts/token" in API group "" in the namespace "default": no relationship found between node 'no-preload-413711' and this object, failed to sync configmap cache: timed out waiting for the condition]
	Oct 17 20:07:19 no-preload-413711 kubelet[767]: I1017 20:07:19.525210     767 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 17 20:07:21 no-preload-413711 kubelet[767]: I1017 20:07:21.831989     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c1c9f0ad-711b-4d30-8118-7bf18df1e175-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-2v5z9\" (UID: \"c1c9f0ad-711b-4d30-8118-7bf18df1e175\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9"
	Oct 17 20:07:21 no-preload-413711 kubelet[767]: I1017 20:07:21.832048     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp8x9\" (UniqueName: \"kubernetes.io/projected/c1c9f0ad-711b-4d30-8118-7bf18df1e175-kube-api-access-fp8x9\") pod \"dashboard-metrics-scraper-6ffb444bf9-2v5z9\" (UID: \"c1c9f0ad-711b-4d30-8118-7bf18df1e175\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9"
	Oct 17 20:07:21 no-preload-413711 kubelet[767]: I1017 20:07:21.932567     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q92v2\" (UniqueName: \"kubernetes.io/projected/4c45f2f1-d92a-465c-84fd-c82ef9c49fda-kube-api-access-q92v2\") pod \"kubernetes-dashboard-855c9754f9-s7s2d\" (UID: \"4c45f2f1-d92a-465c-84fd-c82ef9c49fda\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s7s2d"
	Oct 17 20:07:21 no-preload-413711 kubelet[767]: I1017 20:07:21.932672     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4c45f2f1-d92a-465c-84fd-c82ef9c49fda-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-s7s2d\" (UID: \"4c45f2f1-d92a-465c-84fd-c82ef9c49fda\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s7s2d"
	Oct 17 20:07:22 no-preload-413711 kubelet[767]: W1017 20:07:22.173479     767 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b7258d1208d439b01c28c0b9cffbc08144edd9dba361ded5c67dc59f9d48f892/crio-45a730cbae2755baadd8a3a1827987a9cf8d4927434b1f82e745d3140e823f34 WatchSource:0}: Error finding container 45a730cbae2755baadd8a3a1827987a9cf8d4927434b1f82e745d3140e823f34: Status 404 returned error can't find the container with id 45a730cbae2755baadd8a3a1827987a9cf8d4927434b1f82e745d3140e823f34
	Oct 17 20:07:30 no-preload-413711 kubelet[767]: I1017 20:07:30.144086     767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s7s2d" podStartSLOduration=4.499381478 podStartE2EDuration="9.144065186s" podCreationTimestamp="2025-10-17 20:07:21 +0000 UTC" firstStartedPulling="2025-10-17 20:07:22.191647867 +0000 UTC m=+10.549072324" lastFinishedPulling="2025-10-17 20:07:26.836331493 +0000 UTC m=+15.193756032" observedRunningTime="2025-10-17 20:07:26.984693772 +0000 UTC m=+15.342118221" watchObservedRunningTime="2025-10-17 20:07:30.144065186 +0000 UTC m=+18.501489635"
	Oct 17 20:07:31 no-preload-413711 kubelet[767]: I1017 20:07:31.977210     767 scope.go:117] "RemoveContainer" containerID="e882223772e76094cdb3b872f5f2ab97060adcf67c968d12247fd25c2a1a47c1"
	Oct 17 20:07:32 no-preload-413711 kubelet[767]: I1017 20:07:32.982331     767 scope.go:117] "RemoveContainer" containerID="e882223772e76094cdb3b872f5f2ab97060adcf67c968d12247fd25c2a1a47c1"
	Oct 17 20:07:32 no-preload-413711 kubelet[767]: I1017 20:07:32.982872     767 scope.go:117] "RemoveContainer" containerID="64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42"
	Oct 17 20:07:32 no-preload-413711 kubelet[767]: E1017 20:07:32.983075     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2v5z9_kubernetes-dashboard(c1c9f0ad-711b-4d30-8118-7bf18df1e175)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9" podUID="c1c9f0ad-711b-4d30-8118-7bf18df1e175"
	Oct 17 20:07:33 no-preload-413711 kubelet[767]: I1017 20:07:33.986799     767 scope.go:117] "RemoveContainer" containerID="64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42"
	Oct 17 20:07:33 no-preload-413711 kubelet[767]: E1017 20:07:33.986955     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2v5z9_kubernetes-dashboard(c1c9f0ad-711b-4d30-8118-7bf18df1e175)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9" podUID="c1c9f0ad-711b-4d30-8118-7bf18df1e175"
	Oct 17 20:07:42 no-preload-413711 kubelet[767]: I1017 20:07:42.106753     767 scope.go:117] "RemoveContainer" containerID="64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42"
	Oct 17 20:07:43 no-preload-413711 kubelet[767]: I1017 20:07:43.012430     767 scope.go:117] "RemoveContainer" containerID="64af5fceeeafdefcc6c0d6cd5aedf95c8ac586d654a71e610c256fd19a669e42"
	Oct 17 20:07:43 no-preload-413711 kubelet[767]: I1017 20:07:43.013226     767 scope.go:117] "RemoveContainer" containerID="c6750f7e08419a1ec1ff38425fa3b1f58a501ae0bbd19213da48188848f35535"
	Oct 17 20:07:43 no-preload-413711 kubelet[767]: E1017 20:07:43.013524     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2v5z9_kubernetes-dashboard(c1c9f0ad-711b-4d30-8118-7bf18df1e175)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2v5z9" podUID="c1c9f0ad-711b-4d30-8118-7bf18df1e175"
	Oct 17 20:07:44 no-preload-413711 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:07:44 no-preload-413711 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:07:44 no-preload-413711 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [329671f140367b4adca0adf47c66ff41df81a98a236ca514bc725e0955b7dd09] <==
	2025/10/17 20:07:26 Using namespace: kubernetes-dashboard
	2025/10/17 20:07:26 Using in-cluster config to connect to apiserver
	2025/10/17 20:07:26 Using secret token for csrf signing
	2025/10/17 20:07:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 20:07:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 20:07:26 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 20:07:26 Generating JWE encryption key
	2025/10/17 20:07:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 20:07:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 20:07:27 Initializing JWE encryption key from synchronized object
	2025/10/17 20:07:27 Creating in-cluster Sidecar client
	2025/10/17 20:07:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 20:07:27 Serving insecurely on HTTP port: 9090
	2025/10/17 20:07:26 Starting overwatch
	
	
	==> storage-provisioner [414b28f4d238e57f8f8c4dee16996a2aed70a51a943c5c03f048a67ec51f0bfd] <==
	I1017 20:07:19.997162       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 20:07:50.130518       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-413711 -n no-preload-413711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-413711 -n no-preload-413711: exit status 2 (532.969402ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-413711 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (8.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (7.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-572724 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-572724 --alsologtostderr -v=1: exit status 80 (2.00308237s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-572724 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:08:38.773511  474017 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:08:38.773696  474017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:08:38.773723  474017 out.go:374] Setting ErrFile to fd 2...
	I1017 20:08:38.773744  474017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:08:38.774037  474017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:08:38.774383  474017 out.go:368] Setting JSON to false
	I1017 20:08:38.774431  474017 mustload.go:65] Loading cluster: embed-certs-572724
	I1017 20:08:38.774893  474017 config.go:182] Loaded profile config "embed-certs-572724": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:08:38.775407  474017 cli_runner.go:164] Run: docker container inspect embed-certs-572724 --format={{.State.Status}}
	I1017 20:08:38.800610  474017 host.go:66] Checking if "embed-certs-572724" exists ...
	I1017 20:08:38.800936  474017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:08:38.862429  474017 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-17 20:08:38.852707186 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:08:38.863085  474017 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-572724 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 20:08:38.866582  474017 out.go:179] * Pausing node embed-certs-572724 ... 
	I1017 20:08:38.869335  474017 host.go:66] Checking if "embed-certs-572724" exists ...
	I1017 20:08:38.869675  474017 ssh_runner.go:195] Run: systemctl --version
	I1017 20:08:38.869725  474017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-572724
	I1017 20:08:38.887847  474017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/embed-certs-572724/id_rsa Username:docker}
	I1017 20:08:38.991570  474017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:08:39.022294  474017 pause.go:52] kubelet running: true
	I1017 20:08:39.022382  474017 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:08:39.337466  474017 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:08:39.337577  474017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:08:39.451493  474017 cri.go:89] found id: "b339d56587d6d45e174144f0a9270220b632eb089d17efc47aa29734ab8aa116"
	I1017 20:08:39.451563  474017 cri.go:89] found id: "616441d046923f26d47dc809cc6b9e4d2928b5f8fe7cdb708bd4cc510cc8b27e"
	I1017 20:08:39.451591  474017 cri.go:89] found id: "c6e6947cc661c44f39b176dbf73fa36646f4d009c8d033da09d40f50914d3312"
	I1017 20:08:39.451609  474017 cri.go:89] found id: "f2819e934092e79c2bb65da9f76d0f0615b9efa4dae95114b34ceb074d2f63b2"
	I1017 20:08:39.451630  474017 cri.go:89] found id: "bd8bdd7d12816cda744332cb3b34ffb8e05940de2f7dada91b4a4b21564e0d39"
	I1017 20:08:39.451670  474017 cri.go:89] found id: "711a3fa869605d5a18b3f9781975225dfdd63bf72d85af3b2ba7101a28d13528"
	I1017 20:08:39.451687  474017 cri.go:89] found id: "e224a6e5eb1ca81a4a48fbcc8536252f742bddc7bc1c3afbd37a26b29ac8c998"
	I1017 20:08:39.451706  474017 cri.go:89] found id: "0c97fc08388e70c856c936895f529c1a760925d708cce00a9944a4dd9c8d36a3"
	I1017 20:08:39.451726  474017 cri.go:89] found id: "2e90f4799ad4c01480d7887c5d52c632cc0dc3dea6d59784485224961e8a45af"
	I1017 20:08:39.451761  474017 cri.go:89] found id: "e09ff9a6ff0e54673acb0dbb9922bee948c0f6a0cf24ad23380a636f2ce15717"
	I1017 20:08:39.451780  474017 cri.go:89] found id: "3fc1fbe7031f4ac9b13cdb2127e2a107fca355c0213ff06b195a73131962e39d"
	I1017 20:08:39.451800  474017 cri.go:89] found id: ""
	I1017 20:08:39.451876  474017 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:08:39.464981  474017 retry.go:31] will retry after 127.25255ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:08:39Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:08:39.593321  474017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:08:39.608961  474017 pause.go:52] kubelet running: false
	I1017 20:08:39.609053  474017 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:08:39.858749  474017 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:08:39.858861  474017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:08:39.935983  474017 cri.go:89] found id: "b339d56587d6d45e174144f0a9270220b632eb089d17efc47aa29734ab8aa116"
	I1017 20:08:39.936062  474017 cri.go:89] found id: "616441d046923f26d47dc809cc6b9e4d2928b5f8fe7cdb708bd4cc510cc8b27e"
	I1017 20:08:39.936091  474017 cri.go:89] found id: "c6e6947cc661c44f39b176dbf73fa36646f4d009c8d033da09d40f50914d3312"
	I1017 20:08:39.936110  474017 cri.go:89] found id: "f2819e934092e79c2bb65da9f76d0f0615b9efa4dae95114b34ceb074d2f63b2"
	I1017 20:08:39.936139  474017 cri.go:89] found id: "bd8bdd7d12816cda744332cb3b34ffb8e05940de2f7dada91b4a4b21564e0d39"
	I1017 20:08:39.936162  474017 cri.go:89] found id: "711a3fa869605d5a18b3f9781975225dfdd63bf72d85af3b2ba7101a28d13528"
	I1017 20:08:39.936181  474017 cri.go:89] found id: "e224a6e5eb1ca81a4a48fbcc8536252f742bddc7bc1c3afbd37a26b29ac8c998"
	I1017 20:08:39.936200  474017 cri.go:89] found id: "0c97fc08388e70c856c936895f529c1a760925d708cce00a9944a4dd9c8d36a3"
	I1017 20:08:39.936220  474017 cri.go:89] found id: "2e90f4799ad4c01480d7887c5d52c632cc0dc3dea6d59784485224961e8a45af"
	I1017 20:08:39.936253  474017 cri.go:89] found id: "e09ff9a6ff0e54673acb0dbb9922bee948c0f6a0cf24ad23380a636f2ce15717"
	I1017 20:08:39.936279  474017 cri.go:89] found id: "3fc1fbe7031f4ac9b13cdb2127e2a107fca355c0213ff06b195a73131962e39d"
	I1017 20:08:39.936299  474017 cri.go:89] found id: ""
	I1017 20:08:39.936382  474017 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:08:39.949153  474017 retry.go:31] will retry after 388.264485ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:08:39Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:08:40.337801  474017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:08:40.351868  474017 pause.go:52] kubelet running: false
	I1017 20:08:40.351947  474017 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:08:40.583503  474017 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:08:40.583630  474017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:08:40.675341  474017 cri.go:89] found id: "b339d56587d6d45e174144f0a9270220b632eb089d17efc47aa29734ab8aa116"
	I1017 20:08:40.675415  474017 cri.go:89] found id: "616441d046923f26d47dc809cc6b9e4d2928b5f8fe7cdb708bd4cc510cc8b27e"
	I1017 20:08:40.675434  474017 cri.go:89] found id: "c6e6947cc661c44f39b176dbf73fa36646f4d009c8d033da09d40f50914d3312"
	I1017 20:08:40.675456  474017 cri.go:89] found id: "f2819e934092e79c2bb65da9f76d0f0615b9efa4dae95114b34ceb074d2f63b2"
	I1017 20:08:40.675489  474017 cri.go:89] found id: "bd8bdd7d12816cda744332cb3b34ffb8e05940de2f7dada91b4a4b21564e0d39"
	I1017 20:08:40.675514  474017 cri.go:89] found id: "711a3fa869605d5a18b3f9781975225dfdd63bf72d85af3b2ba7101a28d13528"
	I1017 20:08:40.675533  474017 cri.go:89] found id: "e224a6e5eb1ca81a4a48fbcc8536252f742bddc7bc1c3afbd37a26b29ac8c998"
	I1017 20:08:40.675552  474017 cri.go:89] found id: "0c97fc08388e70c856c936895f529c1a760925d708cce00a9944a4dd9c8d36a3"
	I1017 20:08:40.675572  474017 cri.go:89] found id: "2e90f4799ad4c01480d7887c5d52c632cc0dc3dea6d59784485224961e8a45af"
	I1017 20:08:40.675604  474017 cri.go:89] found id: "e09ff9a6ff0e54673acb0dbb9922bee948c0f6a0cf24ad23380a636f2ce15717"
	I1017 20:08:40.675630  474017 cri.go:89] found id: "3fc1fbe7031f4ac9b13cdb2127e2a107fca355c0213ff06b195a73131962e39d"
	I1017 20:08:40.675650  474017 cri.go:89] found id: ""
	I1017 20:08:40.675731  474017 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:08:40.697639  474017 out.go:203] 
	W1017 20:08:40.700676  474017 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:08:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:08:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:08:40.700704  474017 out.go:285] * 
	* 
	W1017 20:08:40.709228  474017 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:08:40.712945  474017 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-572724 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-572724
helpers_test.go:243: (dbg) docker inspect embed-certs-572724:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e",
	        "Created": "2025-10-17T20:05:49.604188435Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 468432,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:07:35.121148435Z",
	            "FinishedAt": "2025-10-17T20:07:34.144323376Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e/hosts",
	        "LogPath": "/var/lib/docker/containers/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e-json.log",
	        "Name": "/embed-certs-572724",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-572724:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-572724",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e",
	                "LowerDir": "/var/lib/docker/overlay2/c267fed6d4387f13797f2bc94da46399358babf00e15121ce773a82fcdf04251-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c267fed6d4387f13797f2bc94da46399358babf00e15121ce773a82fcdf04251/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c267fed6d4387f13797f2bc94da46399358babf00e15121ce773a82fcdf04251/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c267fed6d4387f13797f2bc94da46399358babf00e15121ce773a82fcdf04251/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-572724",
	                "Source": "/var/lib/docker/volumes/embed-certs-572724/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-572724",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-572724",
	                "name.minikube.sigs.k8s.io": "embed-certs-572724",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9520b0333d59035ca2a9dd8ed87a1f0db75cc5d2fc6e774fb16fd06822c793a5",
	            "SandboxKey": "/var/run/docker/netns/9520b0333d59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-572724": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:14:71:c7:5a:03",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1786ab454405791896f6daa543404507b38480aaf90e1b61a39fa7a7767ad3ab",
	                    "EndpointID": "b8e590f4e6cd92cb3c0689020f37a921b5756727b4b3bc176027f0e93e27c90c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-572724",
	                        "6c48c7c23063"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-572724 -n embed-certs-572724
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-572724 -n embed-certs-572724: exit status 2 (602.549321ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-572724 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-572724 logs -n 25: (1.849199066s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-135652 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-135652       │ jenkins │ v1.37.0 │ 17 Oct 25 20:04 UTC │ 17 Oct 25 20:04 UTC │
	│ image   │ old-k8s-version-135652 image list --format=json                                                                                                                                                                                               │ old-k8s-version-135652       │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ pause   │ -p old-k8s-version-135652 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-135652       │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │                     │
	│ delete  │ -p old-k8s-version-135652                                                                                                                                                                                                                     │ old-k8s-version-135652       │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p cert-expiration-164379 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-164379       │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ delete  │ -p old-k8s-version-135652                                                                                                                                                                                                                     │ old-k8s-version-135652       │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p no-preload-413711 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:06 UTC │
	│ delete  │ -p cert-expiration-164379                                                                                                                                                                                                                     │ cert-expiration-164379       │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-413711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │                     │
	│ stop    │ -p no-preload-413711 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable dashboard -p no-preload-413711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p no-preload-413711 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable metrics-server -p embed-certs-572724 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ stop    │ -p embed-certs-572724 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable dashboard -p embed-certs-572724 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:08 UTC │
	│ image   │ no-preload-413711 image list --format=json                                                                                                                                                                                                    │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ pause   │ -p no-preload-413711 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ delete  │ -p no-preload-413711                                                                                                                                                                                                                          │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ delete  │ -p no-preload-413711                                                                                                                                                                                                                          │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ delete  │ -p disable-driver-mounts-672422                                                                                                                                                                                                               │ disable-driver-mounts-672422 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p default-k8s-diff-port-740780 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ image   │ embed-certs-572724 image list --format=json                                                                                                                                                                                                   │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ pause   │ -p embed-certs-572724 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:07:56
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:07:56.130484  471476 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:07:56.130630  471476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:07:56.130643  471476 out.go:374] Setting ErrFile to fd 2...
	I1017 20:07:56.130648  471476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:07:56.130946  471476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:07:56.131408  471476 out.go:368] Setting JSON to false
	I1017 20:07:56.132484  471476 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10227,"bootTime":1760721449,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 20:07:56.132596  471476 start.go:141] virtualization:  
	I1017 20:07:56.136430  471476 out.go:179] * [default-k8s-diff-port-740780] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:07:56.139632  471476 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:07:56.139676  471476 notify.go:220] Checking for updates...
	I1017 20:07:56.145728  471476 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:07:56.148734  471476 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:07:56.151653  471476 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 20:07:56.154631  471476 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:07:56.157535  471476 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:07:56.161045  471476 config.go:182] Loaded profile config "embed-certs-572724": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:07:56.161211  471476 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:07:56.191265  471476 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:07:56.191389  471476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:07:56.255038  471476 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-17 20:07:56.245369353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:07:56.255154  471476 docker.go:318] overlay module found
	I1017 20:07:56.258516  471476 out.go:179] * Using the docker driver based on user configuration
	I1017 20:07:56.261426  471476 start.go:305] selected driver: docker
	I1017 20:07:56.261449  471476 start.go:925] validating driver "docker" against <nil>
	I1017 20:07:56.261470  471476 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:07:56.262302  471476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:07:56.317447  471476 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-17 20:07:56.30744766 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:07:56.317615  471476 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 20:07:56.317856  471476 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:07:56.320950  471476 out.go:179] * Using Docker driver with root privileges
	I1017 20:07:56.323789  471476 cni.go:84] Creating CNI manager for ""
	I1017 20:07:56.323858  471476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:07:56.323870  471476 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 20:07:56.323943  471476 start.go:349] cluster config:
	{Name:default-k8s-diff-port-740780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-740780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:07:56.328846  471476 out.go:179] * Starting "default-k8s-diff-port-740780" primary control-plane node in "default-k8s-diff-port-740780" cluster
	I1017 20:07:56.331667  471476 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:07:56.334623  471476 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:07:56.337502  471476 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:07:56.337562  471476 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:07:56.337577  471476 cache.go:58] Caching tarball of preloaded images
	I1017 20:07:56.337587  471476 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:07:56.337659  471476 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:07:56.337669  471476 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:07:56.337786  471476 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/config.json ...
	I1017 20:07:56.337807  471476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/config.json: {Name:mkc8368c13a19534d51dd5675e2c2c5fbe4b66d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:07:56.358840  471476 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:07:56.358865  471476 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:07:56.358884  471476 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:07:56.358957  471476 start.go:360] acquireMachinesLock for default-k8s-diff-port-740780: {Name:mkb4281c63cf8ac1be83a7647fdf1335968a6b70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:07:56.359109  471476 start.go:364] duration metric: took 130.745µs to acquireMachinesLock for "default-k8s-diff-port-740780"
	I1017 20:07:56.359140  471476 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-740780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-740780 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:07:56.359259  471476 start.go:125] createHost starting for "" (driver="docker")
	W1017 20:07:56.361240  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	W1017 20:07:58.361303  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	I1017 20:07:56.362830  471476 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 20:07:56.363053  471476 start.go:159] libmachine.API.Create for "default-k8s-diff-port-740780" (driver="docker")
	I1017 20:07:56.363105  471476 client.go:168] LocalClient.Create starting
	I1017 20:07:56.363905  471476 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem
	I1017 20:07:56.363950  471476 main.go:141] libmachine: Decoding PEM data...
	I1017 20:07:56.363966  471476 main.go:141] libmachine: Parsing certificate...
	I1017 20:07:56.364384  471476 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem
	I1017 20:07:56.364422  471476 main.go:141] libmachine: Decoding PEM data...
	I1017 20:07:56.364434  471476 main.go:141] libmachine: Parsing certificate...
	I1017 20:07:56.364908  471476 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-740780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 20:07:56.380571  471476 cli_runner.go:211] docker network inspect default-k8s-diff-port-740780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 20:07:56.380653  471476 network_create.go:284] running [docker network inspect default-k8s-diff-port-740780] to gather additional debugging logs...
	I1017 20:07:56.380681  471476 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-740780
	W1017 20:07:56.396739  471476 cli_runner.go:211] docker network inspect default-k8s-diff-port-740780 returned with exit code 1
	I1017 20:07:56.396778  471476 network_create.go:287] error running [docker network inspect default-k8s-diff-port-740780]: docker network inspect default-k8s-diff-port-740780: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-740780 not found
	I1017 20:07:56.396793  471476 network_create.go:289] output of [docker network inspect default-k8s-diff-port-740780]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-740780 not found
	
	** /stderr **
	I1017 20:07:56.396889  471476 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:07:56.413488  471476 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9f667d9c3ea2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:fc:1d:c6:d2:da} reservation:<nil>}
	I1017 20:07:56.413763  471476 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-82a22734829b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:22:5a:78:c5:e0:0a} reservation:<nil>}
	I1017 20:07:56.414111  471476 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0b88bd3b523f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:75:74:cd:15:9b} reservation:<nil>}
	I1017 20:07:56.414545  471476 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0c2a0}
	I1017 20:07:56.414568  471476 network_create.go:124] attempt to create docker network default-k8s-diff-port-740780 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1017 20:07:56.414625  471476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-740780 default-k8s-diff-port-740780
	I1017 20:07:56.480726  471476 network_create.go:108] docker network default-k8s-diff-port-740780 192.168.76.0/24 created
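For reference, the subnet probing visible above (192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 skipped as taken, 192.168.76.0/24 chosen) can be sketched in Go as below. This is an illustration only, not minikube's network_create code; the starting octet and the step of 9 are assumptions read off this particular run.

	// Illustrative sketch of the "skip taken /24, use first free /24" pattern above.
	// Not minikube's implementation; start octet and step are assumptions from this log.
	package main

	import (
		"fmt"
		"net"
	)

	// subnetTaken reports whether any local interface address falls inside the candidate /24.
	func subnetTaken(candidate *net.IPNet) bool {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return false
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
				return true
			}
		}
		return false
	}

	func main() {
		for octet := 49; octet <= 76; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			_, ipnet, err := net.ParseCIDR(cidr)
			if err != nil {
				continue
			}
			if subnetTaken(ipnet) {
				fmt.Println("skipping taken subnet", cidr)
				continue
			}
			fmt.Println("using free private subnet", cidr)
			break
		}
	}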
	I1017 20:07:56.480758  471476 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-740780" container
	I1017 20:07:56.480854  471476 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 20:07:56.498178  471476 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-740780 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-740780 --label created_by.minikube.sigs.k8s.io=true
	I1017 20:07:56.524270  471476 oci.go:103] Successfully created a docker volume default-k8s-diff-port-740780
	I1017 20:07:56.524379  471476 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-740780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-740780 --entrypoint /usr/bin/test -v default-k8s-diff-port-740780:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 20:07:57.139001  471476 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-740780
	I1017 20:07:57.139048  471476 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:07:57.139067  471476 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 20:07:57.139209  471476 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-740780:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1017 20:08:00.437494  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	W1017 20:08:02.859463  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	I1017 20:08:03.227642  471476 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-740780:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (6.08838585s)
	I1017 20:08:03.227672  471476 kic.go:203] duration metric: took 6.088601049s to extract preloaded images to volume ...
	W1017 20:08:03.227816  471476 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1017 20:08:03.227927  471476 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 20:08:03.294002  471476 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-740780 --name default-k8s-diff-port-740780 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-740780 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-740780 --network default-k8s-diff-port-740780 --ip 192.168.76.2 --volume default-k8s-diff-port-740780:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 20:08:03.805508  471476 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Running}}
	I1017 20:08:03.836656  471476 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:08:03.863379  471476 cli_runner.go:164] Run: docker exec default-k8s-diff-port-740780 stat /var/lib/dpkg/alternatives/iptables
	I1017 20:08:03.934792  471476 oci.go:144] the created container "default-k8s-diff-port-740780" has a running status.
	I1017 20:08:03.934827  471476 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa...
	I1017 20:08:05.851759  471476 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 20:08:05.879668  471476 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:08:05.904452  471476 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 20:08:05.904477  471476 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-740780 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 20:08:05.955559  471476 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:08:05.981818  471476 machine.go:93] provisionDockerMachine start ...
	I1017 20:08:05.981912  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:06.004157  471476 main.go:141] libmachine: Using SSH client type: native
	I1017 20:08:06.004682  471476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33439 <nil> <nil>}
	I1017 20:08:06.004700  471476 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:08:06.005670  471476 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1017 20:08:04.870892  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	W1017 20:08:07.362039  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	W1017 20:08:09.366370  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	I1017 20:08:09.168644  471476 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-740780
	
	I1017 20:08:09.168666  471476 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-740780"
	I1017 20:08:09.168729  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:09.195815  471476 main.go:141] libmachine: Using SSH client type: native
	I1017 20:08:09.196132  471476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33439 <nil> <nil>}
	I1017 20:08:09.196145  471476 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-740780 && echo "default-k8s-diff-port-740780" | sudo tee /etc/hostname
	I1017 20:08:09.359547  471476 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-740780
	
	I1017 20:08:09.359620  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:09.379371  471476 main.go:141] libmachine: Using SSH client type: native
	I1017 20:08:09.379711  471476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33439 <nil> <nil>}
	I1017 20:08:09.379729  471476 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-740780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-740780/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-740780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:08:09.532825  471476 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:08:09.532871  471476 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 20:08:09.532893  471476 ubuntu.go:190] setting up certificates
	I1017 20:08:09.532902  471476 provision.go:84] configureAuth start
	I1017 20:08:09.532965  471476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-740780
	I1017 20:08:09.550500  471476 provision.go:143] copyHostCerts
	I1017 20:08:09.550565  471476 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 20:08:09.550576  471476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 20:08:09.550652  471476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 20:08:09.550739  471476 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 20:08:09.550745  471476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 20:08:09.550768  471476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 20:08:09.550818  471476 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 20:08:09.550823  471476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 20:08:09.550845  471476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 20:08:09.550889  471476 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-740780 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-740780 localhost minikube]
	I1017 20:08:09.912917  471476 provision.go:177] copyRemoteCerts
	I1017 20:08:09.912984  471476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:08:09.913035  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:09.930982  471476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:08:10.037530  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:08:10.057354  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1017 20:08:10.076382  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:08:10.098841  471476 provision.go:87] duration metric: took 565.923647ms to configureAuth
	I1017 20:08:10.098876  471476 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:08:10.099099  471476 config.go:182] Loaded profile config "default-k8s-diff-port-740780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:08:10.099230  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:10.121230  471476 main.go:141] libmachine: Using SSH client type: native
	I1017 20:08:10.124488  471476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33439 <nil> <nil>}
	I1017 20:08:10.124546  471476 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:08:10.484226  471476 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:08:10.484281  471476 machine.go:96] duration metric: took 4.502440537s to provisionDockerMachine
	I1017 20:08:10.484307  471476 client.go:171] duration metric: took 14.121191487s to LocalClient.Create
	I1017 20:08:10.484356  471476 start.go:167] duration metric: took 14.121302113s to libmachine.API.Create "default-k8s-diff-port-740780"
	I1017 20:08:10.484379  471476 start.go:293] postStartSetup for "default-k8s-diff-port-740780" (driver="docker")
	I1017 20:08:10.484404  471476 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:08:10.484498  471476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:08:10.484599  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:10.501824  471476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:08:10.604870  471476 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:08:10.609426  471476 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:08:10.609456  471476 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:08:10.609468  471476 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 20:08:10.609544  471476 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 20:08:10.609631  471476 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 20:08:10.609738  471476 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:08:10.617698  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:08:10.636354  471476 start.go:296] duration metric: took 151.944566ms for postStartSetup
	I1017 20:08:10.636784  471476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-740780
	I1017 20:08:10.653853  471476 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/config.json ...
	I1017 20:08:10.654154  471476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:08:10.654211  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:10.671051  471476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:08:10.773458  471476 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:08:10.778327  471476 start.go:128] duration metric: took 14.41905032s to createHost
	I1017 20:08:10.778362  471476 start.go:83] releasing machines lock for "default-k8s-diff-port-740780", held for 14.419240493s
	I1017 20:08:10.778470  471476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-740780
	I1017 20:08:10.796807  471476 ssh_runner.go:195] Run: cat /version.json
	I1017 20:08:10.796859  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:10.796866  471476 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:08:10.796927  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:10.815679  471476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:08:10.820385  471476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:08:11.020151  471476 ssh_runner.go:195] Run: systemctl --version
	I1017 20:08:11.026822  471476 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:08:11.061990  471476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:08:11.066532  471476 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:08:11.066676  471476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:08:11.095325  471476 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1017 20:08:11.095351  471476 start.go:495] detecting cgroup driver to use...
	I1017 20:08:11.095406  471476 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:08:11.095479  471476 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:08:11.114767  471476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:08:11.133206  471476 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:08:11.133309  471476 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:08:11.153924  471476 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:08:11.174100  471476 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:08:11.306397  471476 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:08:11.447880  471476 docker.go:234] disabling docker service ...
	I1017 20:08:11.448017  471476 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:08:11.470150  471476 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:08:11.485325  471476 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:08:11.600871  471476 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:08:11.711969  471476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:08:11.725875  471476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:08:11.740728  471476 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:08:11.740835  471476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:08:11.750217  471476 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:08:11.750301  471476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:08:11.759114  471476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:08:11.767799  471476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:08:11.778890  471476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:08:11.787617  471476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:08:11.796501  471476 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:08:11.813193  471476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:08:11.822753  471476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:08:11.830294  471476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:08:11.838099  471476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:08:11.964044  471476 ssh_runner.go:195] Run: sudo systemctl restart crio
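The sed edits above (pause image, cgroup_manager, conmon_cgroup) amount to a small set of line rewrites in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A minimal Go sketch of the same rewrites, using a stand-in input string and only the values shown in the log (the default_sysctls addition for net.ipv4.ip_unprivileged_port_start is omitted for brevity):

	// Illustrative only: what the sed commands above do to the CRI-O drop-in.
	// The conf string is a stand-in for /etc/crio/crio.conf.d/02-crio.conf.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := "pause_image = \"registry.k8s.io/pause:3.10\"\n" +
			"cgroup_manager = \"systemd\"\n" +
			"conmon_cgroup = \"system.slice\"\n"

		// Point CRI-O at the pause image minikube expects.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// Match the "cgroupfs" driver detected on the host.
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		// Drop any existing conmon_cgroup line, then re-add it as "pod" after cgroup_manager.
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
			ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

		fmt.Print(conf)
	}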
	I1017 20:08:12.288699  471476 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:08:12.288776  471476 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:08:12.293890  471476 start.go:563] Will wait 60s for crictl version
	I1017 20:08:12.293992  471476 ssh_runner.go:195] Run: which crictl
	I1017 20:08:12.298634  471476 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:08:12.325899  471476 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:08:12.326005  471476 ssh_runner.go:195] Run: crio --version
	I1017 20:08:12.355928  471476 ssh_runner.go:195] Run: crio --version
	I1017 20:08:12.391246  471476 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1017 20:08:11.860577  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	W1017 20:08:14.361081  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	I1017 20:08:12.394105  471476 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-740780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:08:12.411072  471476 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 20:08:12.415468  471476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:08:12.424967  471476 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-740780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-740780 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:08:12.425082  471476 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:08:12.425139  471476 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:08:12.461920  471476 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:08:12.461944  471476 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:08:12.462002  471476 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:08:12.487289  471476 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:08:12.487315  471476 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:08:12.487324  471476 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1017 20:08:12.487410  471476 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-740780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-740780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:08:12.487494  471476 ssh_runner.go:195] Run: crio config
	I1017 20:08:12.553162  471476 cni.go:84] Creating CNI manager for ""
	I1017 20:08:12.553188  471476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:08:12.553205  471476 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:08:12.553228  471476 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-740780 NodeName:default-k8s-diff-port-740780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:08:12.553358  471476 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-740780"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
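The generated config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch for reading it back and checking the kubelet's cgroupDriver and CRI endpoint; the local file name here is hypothetical (the log scp's the rendered file to /var/tmp/minikube/kubeadm.yaml.new on the node):

	// Illustrative sketch: decode each YAML document and print the kubelet settings
	// that have to agree with the CRI-O configuration (cgroup driver, CRI socket).
	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				break // io.EOF once all documents are consumed
			}
			kind, _ := doc["kind"].(string)
			fmt.Println("document kind:", kind)
			if kind == "KubeletConfiguration" {
				fmt.Println("  cgroupDriver:", doc["cgroupDriver"])
				fmt.Println("  containerRuntimeEndpoint:", doc["containerRuntimeEndpoint"])
			}
		}
	}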
	
	I1017 20:08:12.553438  471476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:08:12.565451  471476 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:08:12.565524  471476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:08:12.574553  471476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1017 20:08:12.587897  471476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:08:12.601751  471476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1017 20:08:12.615377  471476 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:08:12.618841  471476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:08:12.628376  471476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:08:12.744492  471476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:08:12.760368  471476 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780 for IP: 192.168.76.2
	I1017 20:08:12.760387  471476 certs.go:195] generating shared ca certs ...
	I1017 20:08:12.760402  471476 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:12.760613  471476 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 20:08:12.760659  471476 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 20:08:12.760667  471476 certs.go:257] generating profile certs ...
	I1017 20:08:12.760721  471476 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.key
	I1017 20:08:12.760732  471476 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.crt with IP's: []
	I1017 20:08:13.283368  471476 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.crt ...
	I1017 20:08:13.283402  471476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.crt: {Name:mkdcfa98906e44150f55d463818efda9144d9a82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:13.283602  471476 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.key ...
	I1017 20:08:13.283621  471476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.key: {Name:mk4850b021ac99e7073fadd55c4842af8142c277 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:13.283721  471476 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.key.79d0c2c9
	I1017 20:08:13.283741  471476 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.crt.79d0c2c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
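The apiserver profile cert generated here carries IP SANs rather than DNS names (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2). A self-signed Go sketch of issuing a certificate with those SANs; in the real flow the cert is signed by the minikubeCA key, and the key size, validity and subject below are assumptions for the example:

	// Illustrative only (not minikube's crypto.go): a self-signed serving cert
	// with the IP SANs listed in the log line above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
			},
		}
		// Self-signed for brevity; the real profile cert is signed by the minikubeCA key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}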
	I1017 20:08:14.582058  471476 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.crt.79d0c2c9 ...
	I1017 20:08:14.582092  471476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.crt.79d0c2c9: {Name:mk9b6998f2a4fe254f3f17cfb7afa631ef0192cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:14.582344  471476 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.key.79d0c2c9 ...
	I1017 20:08:14.582364  471476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.key.79d0c2c9: {Name:mkc471f426f45c198cb2f26b3488174f65aae5d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:14.582510  471476 certs.go:382] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.crt.79d0c2c9 -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.crt
	I1017 20:08:14.582621  471476 certs.go:386] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.key.79d0c2c9 -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.key
	I1017 20:08:14.582688  471476 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.key
	I1017 20:08:14.582711  471476 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.crt with IP's: []
	I1017 20:08:15.735954  471476 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.crt ...
	I1017 20:08:15.735985  471476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.crt: {Name:mk69b2b16cddbbfea363d64b0c07e981e5ba15fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:15.736173  471476 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.key ...
	I1017 20:08:15.736224  471476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.key: {Name:mk82a2a7362a1acefc9ebfc6b1ca0c874cff93d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:15.736427  471476 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 20:08:15.736475  471476 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 20:08:15.736489  471476 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:08:15.736513  471476 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:08:15.736562  471476 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:08:15.736591  471476 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 20:08:15.736642  471476 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:08:15.737261  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:08:15.755871  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:08:15.775053  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:08:15.799705  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 20:08:15.819800  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 20:08:15.839532  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:08:15.860010  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:08:15.878687  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:08:15.895733  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:08:15.917814  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 20:08:15.935680  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 20:08:15.953223  471476 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:08:15.965806  471476 ssh_runner.go:195] Run: openssl version
	I1017 20:08:15.971953  471476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:08:15.980011  471476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:08:15.983673  471476 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:08:15.983738  471476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:08:16.025264  471476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:08:16.034352  471476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 20:08:16.042940  471476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 20:08:16.046971  471476 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 20:08:16.047079  471476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 20:08:16.088217  471476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 20:08:16.096634  471476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 20:08:16.105151  471476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 20:08:16.109004  471476 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 20:08:16.109074  471476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 20:08:16.150224  471476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:08:16.158332  471476 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:08:16.161837  471476 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 20:08:16.161891  471476 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-740780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-740780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:08:16.161964  471476 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:08:16.162021  471476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:08:16.188822  471476 cri.go:89] found id: ""
	I1017 20:08:16.188901  471476 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:08:16.196785  471476 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:08:16.209092  471476 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 20:08:16.209204  471476 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:08:16.217695  471476 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:08:16.217757  471476 kubeadm.go:157] found existing configuration files:
	
	I1017 20:08:16.217823  471476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1017 20:08:16.225697  471476 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:08:16.225781  471476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:08:16.233284  471476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1017 20:08:16.240900  471476 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:08:16.240973  471476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:08:16.248894  471476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1017 20:08:16.256616  471476 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:08:16.256709  471476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:08:16.264195  471476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1017 20:08:16.272200  471476 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:08:16.272264  471476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 20:08:16.280088  471476 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 20:08:16.324174  471476 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 20:08:16.324240  471476 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 20:08:16.357384  471476 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 20:08:16.357459  471476 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1017 20:08:16.357503  471476 kubeadm.go:318] OS: Linux
	I1017 20:08:16.357555  471476 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 20:08:16.357605  471476 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1017 20:08:16.357654  471476 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 20:08:16.357704  471476 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 20:08:16.357754  471476 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 20:08:16.357805  471476 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 20:08:16.357852  471476 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 20:08:16.357907  471476 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 20:08:16.357960  471476 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1017 20:08:16.431190  471476 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 20:08:16.431415  471476 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 20:08:16.431569  471476 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 20:08:16.439133  471476 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1017 20:08:16.860887  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	W1017 20:08:18.861307  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	I1017 20:08:16.444395  471476 out.go:252]   - Generating certificates and keys ...
	I1017 20:08:16.444581  471476 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 20:08:16.444679  471476 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 20:08:17.191151  471476 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 20:08:17.269311  471476 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 20:08:17.573364  471476 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 20:08:18.461369  471476 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 20:08:18.723488  471476 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 20:08:18.723736  471476 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-740780 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1017 20:08:18.988867  471476 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 20:08:18.989174  471476 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-740780 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1017 20:08:19.479053  471476 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 20:08:20.549373  471476 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	W1017 20:08:20.861473  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	W1017 20:08:23.361902  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	I1017 20:08:22.104364  471476 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 20:08:22.104704  471476 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 20:08:23.520635  471476 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 20:08:24.062218  471476 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 20:08:24.429027  471476 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 20:08:24.891370  471476 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 20:08:25.475152  471476 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 20:08:25.475971  471476 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 20:08:25.478568  471476 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 20:08:25.481915  471476 out.go:252]   - Booting up control plane ...
	I1017 20:08:25.482019  471476 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 20:08:25.482106  471476 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 20:08:25.482185  471476 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 20:08:25.505687  471476 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 20:08:25.505808  471476 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 20:08:25.514215  471476 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 20:08:25.514537  471476 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 20:08:25.514779  471476 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 20:08:25.656978  471476 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 20:08:25.661908  471476 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 20:08:25.359729  468306 pod_ready.go:94] pod "coredns-66bc5c9577-q9n55" is "Ready"
	I1017 20:08:25.359803  468306 pod_ready.go:86] duration metric: took 33.005558762s for pod "coredns-66bc5c9577-q9n55" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:25.362573  468306 pod_ready.go:83] waiting for pod "etcd-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:25.367239  468306 pod_ready.go:94] pod "etcd-embed-certs-572724" is "Ready"
	I1017 20:08:25.367255  468306 pod_ready.go:86] duration metric: took 4.665006ms for pod "etcd-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:25.369418  468306 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:25.373508  468306 pod_ready.go:94] pod "kube-apiserver-embed-certs-572724" is "Ready"
	I1017 20:08:25.373559  468306 pod_ready.go:86] duration metric: took 4.125337ms for pod "kube-apiserver-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:25.375609  468306 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:25.559095  468306 pod_ready.go:94] pod "kube-controller-manager-embed-certs-572724" is "Ready"
	I1017 20:08:25.559127  468306 pod_ready.go:86] duration metric: took 183.479261ms for pod "kube-controller-manager-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:25.757437  468306 pod_ready.go:83] waiting for pod "kube-proxy-2jxkk" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:26.159210  468306 pod_ready.go:94] pod "kube-proxy-2jxkk" is "Ready"
	I1017 20:08:26.159297  468306 pod_ready.go:86] duration metric: took 401.816788ms for pod "kube-proxy-2jxkk" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:26.358105  468306 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:26.758460  468306 pod_ready.go:94] pod "kube-scheduler-embed-certs-572724" is "Ready"
	I1017 20:08:26.758485  468306 pod_ready.go:86] duration metric: took 400.298437ms for pod "kube-scheduler-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:26.758499  468306 pod_ready.go:40] duration metric: took 34.410781325s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:08:26.877389  468306 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 20:08:26.880455  468306 out.go:179] * Done! kubectl is now configured to use "embed-certs-572724" cluster and "default" namespace by default
	I1017 20:08:28.163237  471476 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.501429763s
	I1017 20:08:28.166885  471476 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 20:08:28.166984  471476 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1017 20:08:28.167077  471476 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 20:08:28.167160  471476 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 20:08:30.957099  471476 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.789628271s
	I1017 20:08:32.473922  471476 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.307035154s
	I1017 20:08:34.168512  471476 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001505197s
	I1017 20:08:34.191329  471476 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 20:08:34.214350  471476 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 20:08:34.227790  471476 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 20:08:34.228037  471476 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-740780 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 20:08:34.245096  471476 kubeadm.go:318] [bootstrap-token] Using token: 6bl1gy.fzpcm8t5vlrraadh
	I1017 20:08:34.248272  471476 out.go:252]   - Configuring RBAC rules ...
	I1017 20:08:34.248409  471476 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 20:08:34.252995  471476 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 20:08:34.263465  471476 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 20:08:34.267767  471476 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 20:08:34.272312  471476 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 20:08:34.276356  471476 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 20:08:34.576889  471476 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 20:08:35.025049  471476 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 20:08:35.575142  471476 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 20:08:35.576400  471476 kubeadm.go:318] 
	I1017 20:08:35.576474  471476 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 20:08:35.576480  471476 kubeadm.go:318] 
	I1017 20:08:35.576596  471476 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 20:08:35.576604  471476 kubeadm.go:318] 
	I1017 20:08:35.576629  471476 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 20:08:35.576690  471476 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 20:08:35.576743  471476 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 20:08:35.576747  471476 kubeadm.go:318] 
	I1017 20:08:35.576803  471476 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 20:08:35.576808  471476 kubeadm.go:318] 
	I1017 20:08:35.576858  471476 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 20:08:35.576862  471476 kubeadm.go:318] 
	I1017 20:08:35.576923  471476 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 20:08:35.577002  471476 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 20:08:35.577073  471476 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 20:08:35.577083  471476 kubeadm.go:318] 
	I1017 20:08:35.577171  471476 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 20:08:35.577252  471476 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 20:08:35.577256  471476 kubeadm.go:318] 
	I1017 20:08:35.577343  471476 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token 6bl1gy.fzpcm8t5vlrraadh \
	I1017 20:08:35.577451  471476 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 \
	I1017 20:08:35.577472  471476 kubeadm.go:318] 	--control-plane 
	I1017 20:08:35.577476  471476 kubeadm.go:318] 
	I1017 20:08:35.577564  471476 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 20:08:35.577569  471476 kubeadm.go:318] 
	I1017 20:08:35.577654  471476 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token 6bl1gy.fzpcm8t5vlrraadh \
	I1017 20:08:35.577760  471476 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 
	I1017 20:08:35.580892  471476 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1017 20:08:35.581125  471476 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1017 20:08:35.581234  471476 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 20:08:35.581250  471476 cni.go:84] Creating CNI manager for ""
	I1017 20:08:35.581258  471476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:08:35.584504  471476 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 20:08:35.587408  471476 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 20:08:35.591818  471476 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 20:08:35.591841  471476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 20:08:35.605693  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 20:08:35.908506  471476 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 20:08:35.908671  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:35.908758  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-740780 minikube.k8s.io/updated_at=2025_10_17T20_08_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d minikube.k8s.io/name=default-k8s-diff-port-740780 minikube.k8s.io/primary=true
	I1017 20:08:36.119224  471476 ops.go:34] apiserver oom_adj: -16
	I1017 20:08:36.119362  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:36.619582  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:37.119516  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:37.620443  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:38.120143  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:38.619459  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:39.120319  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:39.620315  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:40.119398  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:40.619941  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:40.830186  471476 kubeadm.go:1113] duration metric: took 4.921566276s to wait for elevateKubeSystemPrivileges
	I1017 20:08:40.830212  471476 kubeadm.go:402] duration metric: took 24.668325763s to StartCluster
	I1017 20:08:40.830228  471476 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:40.830291  471476 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:08:40.831855  471476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:40.832083  471476 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:08:40.832465  471476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 20:08:40.832679  471476 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:08:40.832762  471476 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-740780"
	I1017 20:08:40.832776  471476 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-740780"
	I1017 20:08:40.832800  471476 host.go:66] Checking if "default-k8s-diff-port-740780" exists ...
	I1017 20:08:40.832838  471476 config.go:182] Loaded profile config "default-k8s-diff-port-740780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:08:40.832877  471476 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-740780"
	I1017 20:08:40.832888  471476 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-740780"
	I1017 20:08:40.833220  471476 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:08:40.833262  471476 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:08:40.844337  471476 out.go:179] * Verifying Kubernetes components...
	I1017 20:08:40.847326  471476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:08:40.891266  471476 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-740780"
	I1017 20:08:40.891315  471476 host.go:66] Checking if "default-k8s-diff-port-740780" exists ...
	I1017 20:08:40.891778  471476 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:08:40.908202  471476 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:08:40.911129  471476 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:08:40.911154  471476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:08:40.911217  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:40.971166  471476 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:08:40.971193  471476 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:08:40.971257  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:40.982114  471476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:08:41.007068  471476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	
	
	==> CRI-O <==
	Oct 17 20:08:30 embed-certs-572724 crio[646]: time="2025-10-17T20:08:30.797549022Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=743a8558-6ce6-4ac8-8024-a37f70c8a33e name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:08:30 embed-certs-572724 crio[646]: time="2025-10-17T20:08:30.799077809Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4494190b-2e83-4d54-952a-e575c909607c name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:08:30 embed-certs-572724 crio[646]: time="2025-10-17T20:08:30.800098721Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zr5k/dashboard-metrics-scraper" id=e9bd0d80-4b2b-4a14-bc52-098508595228 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:08:30 embed-certs-572724 crio[646]: time="2025-10-17T20:08:30.800327334Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:08:30 embed-certs-572724 crio[646]: time="2025-10-17T20:08:30.810376203Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:08:30 embed-certs-572724 crio[646]: time="2025-10-17T20:08:30.812915835Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:08:30 embed-certs-572724 crio[646]: time="2025-10-17T20:08:30.849477315Z" level=info msg="Created container e09ff9a6ff0e54673acb0dbb9922bee948c0f6a0cf24ad23380a636f2ce15717: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zr5k/dashboard-metrics-scraper" id=e9bd0d80-4b2b-4a14-bc52-098508595228 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:08:30 embed-certs-572724 crio[646]: time="2025-10-17T20:08:30.850583221Z" level=info msg="Starting container: e09ff9a6ff0e54673acb0dbb9922bee948c0f6a0cf24ad23380a636f2ce15717" id=e8dceeaa-61f2-46cc-bba0-d0a482391f64 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:08:30 embed-certs-572724 crio[646]: time="2025-10-17T20:08:30.852395357Z" level=info msg="Started container" PID=1663 containerID=e09ff9a6ff0e54673acb0dbb9922bee948c0f6a0cf24ad23380a636f2ce15717 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zr5k/dashboard-metrics-scraper id=e8dceeaa-61f2-46cc-bba0-d0a482391f64 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0553efe778e12f4a5596685af577687a94aac33ea614ce1f7c2bd412ffcaffe2
	Oct 17 20:08:30 embed-certs-572724 conmon[1661]: conmon e09ff9a6ff0e54673acb <ninfo>: container 1663 exited with status 1
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.183641803Z" level=info msg="Removing container: 2b392a07f26aa14a78b2c5da250bca827e7f4d45907831cdcece3d346c3e6be1" id=9bdac920-802e-47c8-89a8-757e647cbc9a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.191681626Z" level=info msg="Error loading conmon cgroup of container 2b392a07f26aa14a78b2c5da250bca827e7f4d45907831cdcece3d346c3e6be1: cgroup deleted" id=9bdac920-802e-47c8-89a8-757e647cbc9a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.197988114Z" level=info msg="Removed container 2b392a07f26aa14a78b2c5da250bca827e7f4d45907831cdcece3d346c3e6be1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zr5k/dashboard-metrics-scraper" id=9bdac920-802e-47c8-89a8-757e647cbc9a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.407931069Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.413358016Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.413527948Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.413606715Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.419950674Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.41998648Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.420003621Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.428151732Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.428187859Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.428213039Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.431333152Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.431518484Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	e09ff9a6ff0e5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago       Exited              dashboard-metrics-scraper   2                   0553efe778e12       dashboard-metrics-scraper-6ffb444bf9-8zr5k   kubernetes-dashboard
	b339d56587d6d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago       Running             storage-provisioner         2                   c8baa579adc34       storage-provisioner                          kube-system
	3fc1fbe7031f4       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   db5e8dc78a58b       kubernetes-dashboard-855c9754f9-2gxq5        kubernetes-dashboard
	1bd164b19059d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago       Running             busybox                     1                   265c02a6702c1       busybox                                      default
	616441d046923       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago       Running             kindnet-cni                 1                   3418026f9340b       kindnet-cg6w6                                kube-system
	c6e6947cc661c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago       Running             coredns                     1                   b78a39df92670       coredns-66bc5c9577-q9n55                     kube-system
	f2819e934092e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago       Running             kube-proxy                  1                   0fd7a1e937c36       kube-proxy-2jxkk                             kube-system
	bd8bdd7d12816       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago       Exited              storage-provisioner         1                   c8baa579adc34       storage-provisioner                          kube-system
	711a3fa869605       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   91a1be632a717       kube-controller-manager-embed-certs-572724   kube-system
	e224a6e5eb1ca       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   a22127695ef2a       kube-apiserver-embed-certs-572724            kube-system
	0c97fc08388e7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   acaa92dc160da       kube-scheduler-embed-certs-572724            kube-system
	2e90f4799ad4c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   462ae833db130       etcd-embed-certs-572724                      kube-system
	
	
	==> coredns [c6e6947cc661c44f39b176dbf73fa36646f4d009c8d033da09d40f50914d3312] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52533 - 16189 "HINFO IN 8006110790023923564.2537412446222084605. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003645143s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-572724
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-572724
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=embed-certs-572724
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_06_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:06:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-572724
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:08:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:08:20 +0000   Fri, 17 Oct 2025 20:06:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:08:20 +0000   Fri, 17 Oct 2025 20:06:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:08:20 +0000   Fri, 17 Oct 2025 20:06:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:08:20 +0000   Fri, 17 Oct 2025 20:07:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-572724
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                20557b6e-804a-45ff-a381-36f74b0f1294
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-q9n55                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-embed-certs-572724                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m23s
	  kube-system                 kindnet-cg6w6                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-embed-certs-572724             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-embed-certs-572724    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-2jxkk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-embed-certs-572724             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8zr5k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2gxq5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m16s                  kube-proxy       
	  Normal   Starting                 50s                    kube-proxy       
	  Normal   Starting                 2m35s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m35s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node embed-certs-572724 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node embed-certs-572724 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s (x8 over 2m35s)  kubelet          Node embed-certs-572724 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m23s                  kubelet          Node embed-certs-572724 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m23s                  kubelet          Node embed-certs-572724 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m23s                  kubelet          Node embed-certs-572724 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m19s                  node-controller  Node embed-certs-572724 event: Registered Node embed-certs-572724 in Controller
	  Normal   NodeReady                96s                    kubelet          Node embed-certs-572724 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node embed-certs-572724 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node embed-certs-572724 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node embed-certs-572724 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                    node-controller  Node embed-certs-572724 event: Registered Node embed-certs-572724 in Controller
	
	
	==> dmesg <==
	[Oct17 19:45] overlayfs: idmapped layers are currently not supported
	[Oct17 19:46] overlayfs: idmapped layers are currently not supported
	[ +18.070710] overlayfs: idmapped layers are currently not supported
	[Oct17 19:47] overlayfs: idmapped layers are currently not supported
	[ +43.697346] overlayfs: idmapped layers are currently not supported
	[Oct17 19:48] overlayfs: idmapped layers are currently not supported
	[Oct17 19:49] overlayfs: idmapped layers are currently not supported
	[ +26.194162] overlayfs: idmapped layers are currently not supported
	[Oct17 19:50] overlayfs: idmapped layers are currently not supported
	[Oct17 19:52] overlayfs: idmapped layers are currently not supported
	[Oct17 19:54] overlayfs: idmapped layers are currently not supported
	[Oct17 19:55] overlayfs: idmapped layers are currently not supported
	[Oct17 19:56] overlayfs: idmapped layers are currently not supported
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:01] overlayfs: idmapped layers are currently not supported
	[ +29.873287] overlayfs: idmapped layers are currently not supported
	[Oct17 20:02] overlayfs: idmapped layers are currently not supported
	[ +29.827785] overlayfs: idmapped layers are currently not supported
	[Oct17 20:03] overlayfs: idmapped layers are currently not supported
	[Oct17 20:04] overlayfs: idmapped layers are currently not supported
	[Oct17 20:05] overlayfs: idmapped layers are currently not supported
	[Oct17 20:06] overlayfs: idmapped layers are currently not supported
	[Oct17 20:07] overlayfs: idmapped layers are currently not supported
	[ +30.002292] overlayfs: idmapped layers are currently not supported
	[Oct17 20:08] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2e90f4799ad4c01480d7887c5d52c632cc0dc3dea6d59784485224961e8a45af] <==
	{"level":"warn","ts":"2025-10-17T20:07:46.277600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.307692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.340717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.383141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.415929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.455141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.487313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.532141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.587890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.634345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.668329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.697176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.736498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.752331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.783406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.873687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.889172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.979325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.985691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:47.014079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:47.042418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:47.077798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:47.100200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:47.124914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:47.291464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47008","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:08:42 up  2:51,  0 user,  load average: 5.67, 4.88, 3.45
	Linux embed-certs-572724 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [616441d046923f26d47dc809cc6b9e4d2928b5f8fe7cdb708bd4cc510cc8b27e] <==
	I1017 20:07:51.200282       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:07:51.213274       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 20:07:51.213471       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:07:51.213485       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:07:51.213499       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:07:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:07:51.407620       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:07:51.407689       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:07:51.407721       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:07:51.408991       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 20:08:21.408298       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 20:08:21.408432       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 20:08:21.408589       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 20:08:21.409538       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1017 20:08:22.608169       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:08:22.608211       1 metrics.go:72] Registering metrics
	I1017 20:08:22.608273       1 controller.go:711] "Syncing nftables rules"
	I1017 20:08:31.407404       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:08:31.407461       1 main.go:301] handling current node
	I1017 20:08:41.408665       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:08:41.408699       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e224a6e5eb1ca81a4a48fbcc8536252f742bddc7bc1c3afbd37a26b29ac8c998] <==
	I1017 20:07:49.564641       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 20:07:49.564768       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1017 20:07:49.564988       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:07:49.576813       1 aggregator.go:171] initial CRD sync complete...
	I1017 20:07:49.576843       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 20:07:49.576850       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:07:49.576857       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:07:49.579111       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 20:07:49.579972       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 20:07:49.580060       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:07:49.619040       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 20:07:49.648783       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:07:49.655810       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:07:49.767946       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1017 20:07:49.840289       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:07:50.010902       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:07:51.333858       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:07:51.613590       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:07:51.806443       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:07:51.849060       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:07:52.141000       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.35.189"}
	I1017 20:07:52.232362       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.32.142"}
	I1017 20:07:54.541036       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:07:54.666637       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:07:54.777695       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [711a3fa869605d5a18b3f9781975225dfdd63bf72d85af3b2ba7101a28d13528] <==
	I1017 20:07:54.128367       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 20:07:54.131054       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:07:54.131077       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:07:54.131084       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:07:54.131175       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 20:07:54.133423       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 20:07:54.135993       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:07:54.139044       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:07:54.141171       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:07:54.142595       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:07:54.148755       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 20:07:54.151095       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:07:54.152136       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 20:07:54.158403       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 20:07:54.169667       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 20:07:54.176040       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 20:07:54.176144       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 20:07:54.176169       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 20:07:54.176263       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 20:07:54.177173       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 20:07:54.178273       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 20:07:54.180624       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:07:54.180732       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:07:54.813986       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1017 20:07:54.814068       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [f2819e934092e79c2bb65da9f76d0f0615b9efa4dae95114b34ceb074d2f63b2] <==
	I1017 20:07:52.276926       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:07:52.426252       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:07:52.526344       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:07:52.526448       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 20:07:52.526542       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:07:52.556387       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:07:52.556573       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:07:52.569079       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:07:52.569478       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:07:52.569700       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:07:52.570975       1 config.go:200] "Starting service config controller"
	I1017 20:07:52.571093       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:07:52.571143       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:07:52.571170       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:07:52.571205       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:07:52.571232       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:07:52.571897       1 config.go:309] "Starting node config controller"
	I1017 20:07:52.575463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:07:52.575565       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:07:52.671736       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:07:52.671830       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:07:52.671850       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0c97fc08388e70c856c936895f529c1a760925d708cce00a9944a4dd9c8d36a3] <==
	I1017 20:07:48.378068       1 serving.go:386] Generated self-signed cert in-memory
	I1017 20:07:52.303546       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:07:52.303944       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:07:52.315567       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:07:52.315662       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 20:07:52.315687       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 20:07:52.315720       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 20:07:52.335560       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:07:52.335646       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:07:52.335695       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:07:52.335727       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:07:52.416003       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1017 20:07:52.436686       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:07:52.436628       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:07:55 embed-certs-572724 kubelet[770]: E1017 20:07:55.873508     770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dbe89ec1-24d2-4266-baf3-18b8fe7a333f-kube-api-access-6w7fb podName:dbe89ec1-24d2-4266-baf3-18b8fe7a333f nodeName:}" failed. No retries permitted until 2025-10-17 20:07:56.373475868 +0000 UTC m=+14.808274875 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6w7fb" (UniqueName: "kubernetes.io/projected/dbe89ec1-24d2-4266-baf3-18b8fe7a333f-kube-api-access-6w7fb") pod "dashboard-metrics-scraper-6ffb444bf9-8zr5k" (UID: "dbe89ec1-24d2-4266-baf3-18b8fe7a333f") : failed to sync configmap cache: timed out waiting for the condition
	Oct 17 20:07:55 embed-certs-572724 kubelet[770]: E1017 20:07:55.883805     770 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 17 20:07:55 embed-certs-572724 kubelet[770]: E1017 20:07:55.883856     770 projected.go:196] Error preparing data for projected volume kube-api-access-lwpgw for pod kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2gxq5: failed to sync configmap cache: timed out waiting for the condition
	Oct 17 20:07:55 embed-certs-572724 kubelet[770]: E1017 20:07:55.883928     770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5070c655-fe42-4815-a448-d8d4f574d03a-kube-api-access-lwpgw podName:5070c655-fe42-4815-a448-d8d4f574d03a nodeName:}" failed. No retries permitted until 2025-10-17 20:07:56.38390644 +0000 UTC m=+14.818705447 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lwpgw" (UniqueName: "kubernetes.io/projected/5070c655-fe42-4815-a448-d8d4f574d03a-kube-api-access-lwpgw") pod "kubernetes-dashboard-855c9754f9-2gxq5" (UID: "5070c655-fe42-4815-a448-d8d4f574d03a") : failed to sync configmap cache: timed out waiting for the condition
	Oct 17 20:07:56 embed-certs-572724 kubelet[770]: W1017 20:07:56.557175     770 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e/crio-db5e8dc78a58b07f5ee00bb673a5a988c893b543be6048b6bc49bec5241cf883 WatchSource:0}: Error finding container db5e8dc78a58b07f5ee00bb673a5a988c893b543be6048b6bc49bec5241cf883: Status 404 returned error can't find the container with id db5e8dc78a58b07f5ee00bb673a5a988c893b543be6048b6bc49bec5241cf883
	Oct 17 20:07:56 embed-certs-572724 kubelet[770]: W1017 20:07:56.579451     770 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e/crio-0553efe778e12f4a5596685af577687a94aac33ea614ce1f7c2bd412ffcaffe2 WatchSource:0}: Error finding container 0553efe778e12f4a5596685af577687a94aac33ea614ce1f7c2bd412ffcaffe2: Status 404 returned error can't find the container with id 0553efe778e12f4a5596685af577687a94aac33ea614ce1f7c2bd412ffcaffe2
	Oct 17 20:08:09 embed-certs-572724 kubelet[770]: I1017 20:08:09.113586     770 scope.go:117] "RemoveContainer" containerID="a818a573bcb4067329de1d8d710f6bd33600aac6dc19092c04354c15ce05f211"
	Oct 17 20:08:09 embed-certs-572724 kubelet[770]: I1017 20:08:09.135968     770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2gxq5" podStartSLOduration=8.343845766 podStartE2EDuration="15.135950104s" podCreationTimestamp="2025-10-17 20:07:54 +0000 UTC" firstStartedPulling="2025-10-17 20:07:56.574395991 +0000 UTC m=+15.009195006" lastFinishedPulling="2025-10-17 20:08:03.366500329 +0000 UTC m=+21.801299344" observedRunningTime="2025-10-17 20:08:04.116381132 +0000 UTC m=+22.551180246" watchObservedRunningTime="2025-10-17 20:08:09.135950104 +0000 UTC m=+27.570749119"
	Oct 17 20:08:10 embed-certs-572724 kubelet[770]: I1017 20:08:10.120043     770 scope.go:117] "RemoveContainer" containerID="a818a573bcb4067329de1d8d710f6bd33600aac6dc19092c04354c15ce05f211"
	Oct 17 20:08:10 embed-certs-572724 kubelet[770]: I1017 20:08:10.120824     770 scope.go:117] "RemoveContainer" containerID="2b392a07f26aa14a78b2c5da250bca827e7f4d45907831cdcece3d346c3e6be1"
	Oct 17 20:08:10 embed-certs-572724 kubelet[770]: E1017 20:08:10.120983     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zr5k_kubernetes-dashboard(dbe89ec1-24d2-4266-baf3-18b8fe7a333f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zr5k" podUID="dbe89ec1-24d2-4266-baf3-18b8fe7a333f"
	Oct 17 20:08:11 embed-certs-572724 kubelet[770]: I1017 20:08:11.124298     770 scope.go:117] "RemoveContainer" containerID="2b392a07f26aa14a78b2c5da250bca827e7f4d45907831cdcece3d346c3e6be1"
	Oct 17 20:08:11 embed-certs-572724 kubelet[770]: E1017 20:08:11.125213     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zr5k_kubernetes-dashboard(dbe89ec1-24d2-4266-baf3-18b8fe7a333f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zr5k" podUID="dbe89ec1-24d2-4266-baf3-18b8fe7a333f"
	Oct 17 20:08:16 embed-certs-572724 kubelet[770]: I1017 20:08:16.533700     770 scope.go:117] "RemoveContainer" containerID="2b392a07f26aa14a78b2c5da250bca827e7f4d45907831cdcece3d346c3e6be1"
	Oct 17 20:08:16 embed-certs-572724 kubelet[770]: E1017 20:08:16.533908     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zr5k_kubernetes-dashboard(dbe89ec1-24d2-4266-baf3-18b8fe7a333f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zr5k" podUID="dbe89ec1-24d2-4266-baf3-18b8fe7a333f"
	Oct 17 20:08:22 embed-certs-572724 kubelet[770]: I1017 20:08:22.154471     770 scope.go:117] "RemoveContainer" containerID="bd8bdd7d12816cda744332cb3b34ffb8e05940de2f7dada91b4a4b21564e0d39"
	Oct 17 20:08:30 embed-certs-572724 kubelet[770]: I1017 20:08:30.796644     770 scope.go:117] "RemoveContainer" containerID="2b392a07f26aa14a78b2c5da250bca827e7f4d45907831cdcece3d346c3e6be1"
	Oct 17 20:08:31 embed-certs-572724 kubelet[770]: I1017 20:08:31.178972     770 scope.go:117] "RemoveContainer" containerID="2b392a07f26aa14a78b2c5da250bca827e7f4d45907831cdcece3d346c3e6be1"
	Oct 17 20:08:31 embed-certs-572724 kubelet[770]: I1017 20:08:31.179321     770 scope.go:117] "RemoveContainer" containerID="e09ff9a6ff0e54673acb0dbb9922bee948c0f6a0cf24ad23380a636f2ce15717"
	Oct 17 20:08:31 embed-certs-572724 kubelet[770]: E1017 20:08:31.179500     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zr5k_kubernetes-dashboard(dbe89ec1-24d2-4266-baf3-18b8fe7a333f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zr5k" podUID="dbe89ec1-24d2-4266-baf3-18b8fe7a333f"
	Oct 17 20:08:36 embed-certs-572724 kubelet[770]: I1017 20:08:36.533780     770 scope.go:117] "RemoveContainer" containerID="e09ff9a6ff0e54673acb0dbb9922bee948c0f6a0cf24ad23380a636f2ce15717"
	Oct 17 20:08:36 embed-certs-572724 kubelet[770]: E1017 20:08:36.533966     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zr5k_kubernetes-dashboard(dbe89ec1-24d2-4266-baf3-18b8fe7a333f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zr5k" podUID="dbe89ec1-24d2-4266-baf3-18b8fe7a333f"
	Oct 17 20:08:39 embed-certs-572724 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:08:39 embed-certs-572724 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:08:39 embed-certs-572724 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [3fc1fbe7031f4ac9b13cdb2127e2a107fca355c0213ff06b195a73131962e39d] <==
	2025/10/17 20:08:03 Starting overwatch
	2025/10/17 20:08:03 Using namespace: kubernetes-dashboard
	2025/10/17 20:08:03 Using in-cluster config to connect to apiserver
	2025/10/17 20:08:03 Using secret token for csrf signing
	2025/10/17 20:08:03 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 20:08:03 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 20:08:03 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 20:08:03 Generating JWE encryption key
	2025/10/17 20:08:03 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 20:08:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 20:08:04 Initializing JWE encryption key from synchronized object
	2025/10/17 20:08:04 Creating in-cluster Sidecar client
	2025/10/17 20:08:04 Serving insecurely on HTTP port: 9090
	2025/10/17 20:08:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 20:08:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [b339d56587d6d45e174144f0a9270220b632eb089d17efc47aa29734ab8aa116] <==
	I1017 20:08:22.253768       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:08:22.265150       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:08:22.265257       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 20:08:22.268442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:08:25.724029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:08:29.984434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:08:33.583018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:08:36.637926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:08:39.660965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:08:39.670659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:08:39.670844       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:08:39.671291       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4bb96dd9-2ce5-40c2-b9ba-fad4b582ad41", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-572724_58e2cfac-1c03-4756-a431-519f0676acfd became leader
	I1017 20:08:39.671415       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-572724_58e2cfac-1c03-4756-a431-519f0676acfd!
	W1017 20:08:39.695801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:08:39.709148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:08:39.774251       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-572724_58e2cfac-1c03-4756-a431-519f0676acfd!
	W1017 20:08:41.723947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:08:41.747965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [bd8bdd7d12816cda744332cb3b34ffb8e05940de2f7dada91b4a4b21564e0d39] <==
	I1017 20:07:51.513499       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 20:08:21.564160       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-572724 -n embed-certs-572724
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-572724 -n embed-certs-572724: exit status 2 (412.534672ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-572724 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-572724
helpers_test.go:243: (dbg) docker inspect embed-certs-572724:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e",
	        "Created": "2025-10-17T20:05:49.604188435Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 468432,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:07:35.121148435Z",
	            "FinishedAt": "2025-10-17T20:07:34.144323376Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e/hosts",
	        "LogPath": "/var/lib/docker/containers/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e-json.log",
	        "Name": "/embed-certs-572724",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-572724:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-572724",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e",
	                "LowerDir": "/var/lib/docker/overlay2/c267fed6d4387f13797f2bc94da46399358babf00e15121ce773a82fcdf04251-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c267fed6d4387f13797f2bc94da46399358babf00e15121ce773a82fcdf04251/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c267fed6d4387f13797f2bc94da46399358babf00e15121ce773a82fcdf04251/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c267fed6d4387f13797f2bc94da46399358babf00e15121ce773a82fcdf04251/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-572724",
	                "Source": "/var/lib/docker/volumes/embed-certs-572724/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-572724",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-572724",
	                "name.minikube.sigs.k8s.io": "embed-certs-572724",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9520b0333d59035ca2a9dd8ed87a1f0db75cc5d2fc6e774fb16fd06822c793a5",
	            "SandboxKey": "/var/run/docker/netns/9520b0333d59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-572724": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:14:71:c7:5a:03",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1786ab454405791896f6daa543404507b38480aaf90e1b61a39fa7a7767ad3ab",
	                    "EndpointID": "b8e590f4e6cd92cb3c0689020f37a921b5756727b4b3bc176027f0e93e27c90c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-572724",
	                        "6c48c7c23063"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-572724 -n embed-certs-572724
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-572724 -n embed-certs-572724: exit status 2 (368.541261ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-572724 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-572724 logs -n 25: (1.4811749s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-135652 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-135652       │ jenkins │ v1.37.0 │ 17 Oct 25 20:04 UTC │ 17 Oct 25 20:04 UTC │
	│ image   │ old-k8s-version-135652 image list --format=json                                                                                                                                                                                               │ old-k8s-version-135652       │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ pause   │ -p old-k8s-version-135652 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-135652       │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │                     │
	│ delete  │ -p old-k8s-version-135652                                                                                                                                                                                                                     │ old-k8s-version-135652       │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p cert-expiration-164379 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-164379       │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ delete  │ -p old-k8s-version-135652                                                                                                                                                                                                                     │ old-k8s-version-135652       │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p no-preload-413711 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:06 UTC │
	│ delete  │ -p cert-expiration-164379                                                                                                                                                                                                                     │ cert-expiration-164379       │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-413711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │                     │
	│ stop    │ -p no-preload-413711 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable dashboard -p no-preload-413711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p no-preload-413711 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable metrics-server -p embed-certs-572724 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ stop    │ -p embed-certs-572724 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable dashboard -p embed-certs-572724 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:08 UTC │
	│ image   │ no-preload-413711 image list --format=json                                                                                                                                                                                                    │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ pause   │ -p no-preload-413711 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ delete  │ -p no-preload-413711                                                                                                                                                                                                                          │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ delete  │ -p no-preload-413711                                                                                                                                                                                                                          │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ delete  │ -p disable-driver-mounts-672422                                                                                                                                                                                                               │ disable-driver-mounts-672422 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p default-k8s-diff-port-740780 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ image   │ embed-certs-572724 image list --format=json                                                                                                                                                                                                   │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ pause   │ -p embed-certs-572724 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:07:56
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:07:56.130484  471476 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:07:56.130630  471476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:07:56.130643  471476 out.go:374] Setting ErrFile to fd 2...
	I1017 20:07:56.130648  471476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:07:56.130946  471476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:07:56.131408  471476 out.go:368] Setting JSON to false
	I1017 20:07:56.132484  471476 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10227,"bootTime":1760721449,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 20:07:56.132596  471476 start.go:141] virtualization:  
	I1017 20:07:56.136430  471476 out.go:179] * [default-k8s-diff-port-740780] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:07:56.139632  471476 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:07:56.139676  471476 notify.go:220] Checking for updates...
	I1017 20:07:56.145728  471476 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:07:56.148734  471476 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:07:56.151653  471476 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 20:07:56.154631  471476 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:07:56.157535  471476 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:07:56.161045  471476 config.go:182] Loaded profile config "embed-certs-572724": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:07:56.161211  471476 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:07:56.191265  471476 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:07:56.191389  471476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:07:56.255038  471476 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-17 20:07:56.245369353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:07:56.255154  471476 docker.go:318] overlay module found
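	[Editor's aside] The two `docker system info --format "{{json .}}"` runs above are how the driver check reads daemon facts (storage driver, cgroup driver, CPU/memory) before validating the docker driver. A minimal, hedged sketch of that probe, modelling only fields visible in the log output:

    // docker_info_probe.go: decode a subset of `docker system info --format "{{json .}}"`.
    // Field names mirror the JSON shown in the log; everything else is ignored.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type dockerInfo struct {
        Driver       string `json:"Driver"`       // storage driver, e.g. overlay2
        CgroupDriver string `json:"CgroupDriver"` // cgroupfs vs systemd
        NCPU         int    `json:"NCPU"`
        MemTotal     int64  `json:"MemTotal"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("driver=%s cgroup=%s cpus=%d mem=%dMiB\n",
            info.Driver, info.CgroupDriver, info.NCPU, info.MemTotal/1024/1024)
    }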
	I1017 20:07:56.258516  471476 out.go:179] * Using the docker driver based on user configuration
	I1017 20:07:56.261426  471476 start.go:305] selected driver: docker
	I1017 20:07:56.261449  471476 start.go:925] validating driver "docker" against <nil>
	I1017 20:07:56.261470  471476 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:07:56.262302  471476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:07:56.317447  471476 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-17 20:07:56.30744766 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:07:56.317615  471476 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 20:07:56.317856  471476 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:07:56.320950  471476 out.go:179] * Using Docker driver with root privileges
	I1017 20:07:56.323789  471476 cni.go:84] Creating CNI manager for ""
	I1017 20:07:56.323858  471476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:07:56.323870  471476 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 20:07:56.323943  471476 start.go:349] cluster config:
	{Name:default-k8s-diff-port-740780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-740780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:07:56.328846  471476 out.go:179] * Starting "default-k8s-diff-port-740780" primary control-plane node in "default-k8s-diff-port-740780" cluster
	I1017 20:07:56.331667  471476 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:07:56.334623  471476 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:07:56.337502  471476 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:07:56.337562  471476 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:07:56.337577  471476 cache.go:58] Caching tarball of preloaded images
	I1017 20:07:56.337587  471476 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:07:56.337659  471476 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:07:56.337669  471476 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
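	[Editor's aside] The preload check above amounts to a stat on a versioned tarball path under MINIKUBE_HOME. A sketch, with the path layout taken from the log and the helper name invented for illustration:

    // preload_check.go: look for the cached preload tarball before downloading.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath is a hypothetical helper; the file-name pattern matches the one
    // printed in the log (preloaded-images-k8s-v18-<k8s>-cri-o-overlay-<arch>.tar.lz4).
    func preloadPath(minikubeHome, k8sVersion, arch string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-%s.tar.lz4", k8sVersion, arch)
        return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    }

    func main() {
        p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.34.1", "arm64")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("found local preload, skipping download:", p)
        } else {
            fmt.Println("no local preload at:", p)
        }
    }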
	I1017 20:07:56.337786  471476 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/config.json ...
	I1017 20:07:56.337807  471476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/config.json: {Name:mkc8368c13a19534d51dd5675e2c2c5fbe4b66d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:07:56.358840  471476 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:07:56.358865  471476 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:07:56.358884  471476 cache.go:232] Successfully downloaded all kic artifacts
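	[Editor's aside] The kicbase check ("Found ... in local docker daemon, skipping pull") can be reproduced with a plain `docker image inspect`, which exits non-zero when the image is absent. A hedged sketch:

    // kicbase_check.go: decide between "skipping pull" and pulling the base image.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757"
        // `docker image inspect` fails if the image is not in the local daemon.
        if err := exec.Command("docker", "image", "inspect", img).Run(); err == nil {
            fmt.Println("found in local docker daemon, skipping pull:", img)
        } else {
            fmt.Println("not present locally, would pull:", img)
        }
    }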
	I1017 20:07:56.358957  471476 start.go:360] acquireMachinesLock for default-k8s-diff-port-740780: {Name:mkb4281c63cf8ac1be83a7647fdf1335968a6b70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:07:56.359109  471476 start.go:364] duration metric: took 130.745µs to acquireMachinesLock for "default-k8s-diff-port-740780"
	I1017 20:07:56.359140  471476 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-740780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-740780 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:07:56.359259  471476 start.go:125] createHost starting for "" (driver="docker")
	W1017 20:07:56.361240  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	W1017 20:07:58.361303  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	I1017 20:07:56.362830  471476 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 20:07:56.363053  471476 start.go:159] libmachine.API.Create for "default-k8s-diff-port-740780" (driver="docker")
	I1017 20:07:56.363105  471476 client.go:168] LocalClient.Create starting
	I1017 20:07:56.363905  471476 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem
	I1017 20:07:56.363950  471476 main.go:141] libmachine: Decoding PEM data...
	I1017 20:07:56.363966  471476 main.go:141] libmachine: Parsing certificate...
	I1017 20:07:56.364384  471476 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem
	I1017 20:07:56.364422  471476 main.go:141] libmachine: Decoding PEM data...
	I1017 20:07:56.364434  471476 main.go:141] libmachine: Parsing certificate...
	I1017 20:07:56.364908  471476 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-740780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 20:07:56.380571  471476 cli_runner.go:211] docker network inspect default-k8s-diff-port-740780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 20:07:56.380653  471476 network_create.go:284] running [docker network inspect default-k8s-diff-port-740780] to gather additional debugging logs...
	I1017 20:07:56.380681  471476 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-740780
	W1017 20:07:56.396739  471476 cli_runner.go:211] docker network inspect default-k8s-diff-port-740780 returned with exit code 1
	I1017 20:07:56.396778  471476 network_create.go:287] error running [docker network inspect default-k8s-diff-port-740780]: docker network inspect default-k8s-diff-port-740780: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-740780 not found
	I1017 20:07:56.396793  471476 network_create.go:289] output of [docker network inspect default-k8s-diff-port-740780]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-740780 not found
	
	** /stderr **
	I1017 20:07:56.396889  471476 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:07:56.413488  471476 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9f667d9c3ea2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:fc:1d:c6:d2:da} reservation:<nil>}
	I1017 20:07:56.413763  471476 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-82a22734829b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:22:5a:78:c5:e0:0a} reservation:<nil>}
	I1017 20:07:56.414111  471476 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0b88bd3b523f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:75:74:cd:15:9b} reservation:<nil>}
	I1017 20:07:56.414545  471476 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0c2a0}
	I1017 20:07:56.414568  471476 network_create.go:124] attempt to create docker network default-k8s-diff-port-740780 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1017 20:07:56.414625  471476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-740780 default-k8s-diff-port-740780
	I1017 20:07:56.480726  471476 network_create.go:108] docker network default-k8s-diff-port-740780 192.168.76.0/24 created
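	[Editor's aside] The subnet scan above (skip 192.168.49.0/24, .58, .67; take .76) followed by `docker network create` could be sketched as below. The step of 9 between candidates and the shell pipeline used to list existing subnets are assumptions for illustration; the create flags are the ones the log shows.

    // network_create.go: find a free 192.168.x.0/24 block and create the cluster network.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // usedSubnets lists the subnets of all existing docker networks.
    func usedSubnets() string {
        out, _ := exec.Command("sh", "-c",
            `docker network ls -q | xargs docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'`).Output()
        return string(out)
    }

    func main() {
        name := "default-k8s-diff-port-740780"
        used := usedSubnets()
        for third := 49; third <= 247; third += 9 { // 49, 58, 67, 76, ... as seen in the log
            subnet := fmt.Sprintf("192.168.%d.0/24", third)
            if strings.Contains(used, subnet) {
                fmt.Println("skipping subnet that is taken:", subnet)
                continue
            }
            gateway := fmt.Sprintf("192.168.%d.1", third)
            err := exec.Command("docker", "network", "create", "--driver=bridge",
                "--subnet="+subnet, "--gateway="+gateway,
                "-o", "com.docker.network.driver.mtu=1500",
                "--label=created_by.minikube.sigs.k8s.io=true", name).Run()
            if err == nil {
                fmt.Println("created network", name, "on", subnet)
                return
            }
        }
    }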
	I1017 20:07:56.480758  471476 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-740780" container
	I1017 20:07:56.480854  471476 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 20:07:56.498178  471476 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-740780 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-740780 --label created_by.minikube.sigs.k8s.io=true
	I1017 20:07:56.524270  471476 oci.go:103] Successfully created a docker volume default-k8s-diff-port-740780
	I1017 20:07:56.524379  471476 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-740780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-740780 --entrypoint /usr/bin/test -v default-k8s-diff-port-740780:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 20:07:57.139001  471476 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-740780
	I1017 20:07:57.139048  471476 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:07:57.139067  471476 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 20:07:57.139209  471476 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-740780:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1017 20:08:00.437494  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	W1017 20:08:02.859463  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	I1017 20:08:03.227642  471476 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-740780:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (6.08838585s)
	I1017 20:08:03.227672  471476 kic.go:203] duration metric: took 6.088601049s to extract preloaded images to volume ...
	W1017 20:08:03.227816  471476 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1017 20:08:03.227927  471476 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 20:08:03.294002  471476 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-740780 --name default-k8s-diff-port-740780 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-740780 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-740780 --network default-k8s-diff-port-740780 --ip 192.168.76.2 --volume default-k8s-diff-port-740780:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 20:08:03.805508  471476 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Running}}
	I1017 20:08:03.836656  471476 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:08:03.863379  471476 cli_runner.go:164] Run: docker exec default-k8s-diff-port-740780 stat /var/lib/dpkg/alternatives/iptables
	I1017 20:08:03.934792  471476 oci.go:144] the created container "default-k8s-diff-port-740780" has a running status.
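	[Editor's aside] Because the `docker run` above publishes 22, 2376, 5000, 8444 and 32443 to ephemeral 127.0.0.1 ports, every later step first asks Docker which host port a container port landed on; the repeated `docker container inspect -f '...Ports "22/tcp"...'` calls below do exactly that. A small sketch of the same lookup:

    // host_port.go: resolve the host side of a published container port.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func hostPort(container, containerPort string) (string, error) {
        format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        for _, p := range []string{"22/tcp", "8444/tcp"} {
            hp, err := hostPort("default-k8s-diff-port-740780", p)
            if err != nil {
                panic(err)
            }
            fmt.Printf("%s -> 127.0.0.1:%s\n", p, hp)
        }
    }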
	I1017 20:08:03.934827  471476 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa...
	I1017 20:08:05.851759  471476 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 20:08:05.879668  471476 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:08:05.904452  471476 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 20:08:05.904477  471476 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-740780 chown docker:docker /home/docker/.ssh/authorized_keys]
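	[Editor's aside] The "Creating ssh key for kic" step generates an RSA keypair on the host, then pushes the public half into /home/docker/.ssh/authorized_keys inside the container (the chown above). A sketch of the key-generation half, assuming golang.org/x/crypto/ssh for the authorized_keys encoding:

    // kic_sshkey.go: generate id_rsa / id_rsa.pub for the kic container.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
            panic(err)
        }
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        // This single line is what ends up in /home/docker/.ssh/authorized_keys.
        authorized := ssh.MarshalAuthorizedKey(pub)
        if err := os.WriteFile("id_rsa.pub", authorized, 0o644); err != nil {
            panic(err)
        }
        fmt.Printf("wrote id_rsa and id_rsa.pub (%d bytes)\n", len(authorized))
    }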
	I1017 20:08:05.955559  471476 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:08:05.981818  471476 machine.go:93] provisionDockerMachine start ...
	I1017 20:08:05.981912  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:06.004157  471476 main.go:141] libmachine: Using SSH client type: native
	I1017 20:08:06.004682  471476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33439 <nil> <nil>}
	I1017 20:08:06.004700  471476 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:08:06.005670  471476 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1017 20:08:04.870892  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	W1017 20:08:07.362039  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	W1017 20:08:09.366370  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	I1017 20:08:09.168644  471476 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-740780
	
	I1017 20:08:09.168666  471476 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-740780"
	I1017 20:08:09.168729  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:09.195815  471476 main.go:141] libmachine: Using SSH client type: native
	I1017 20:08:09.196132  471476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33439 <nil> <nil>}
	I1017 20:08:09.196145  471476 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-740780 && echo "default-k8s-diff-port-740780" | sudo tee /etc/hostname
	I1017 20:08:09.359547  471476 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-740780
	
	I1017 20:08:09.359620  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:09.379371  471476 main.go:141] libmachine: Using SSH client type: native
	I1017 20:08:09.379711  471476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33439 <nil> <nil>}
	I1017 20:08:09.379729  471476 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-740780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-740780/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-740780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:08:09.532825  471476 main.go:141] libmachine: SSH cmd err, output: <nil>: 
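	[Editor's aside] provisionDockerMachine talks to the container over plain SSH, dialing the host port Docker mapped for 22/tcp (127.0.0.1:33439 in this run) and retrying past the initial "handshake failed: EOF" while sshd comes up. A minimal sketch of one such command, assuming golang.org/x/crypto/ssh and the id_rsa generated earlier:

    // ssh_exec.go: run a command in the node container over the mapped SSH port.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test container
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33439", cfg) // port taken from this run's log
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }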
	I1017 20:08:09.532871  471476 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 20:08:09.532893  471476 ubuntu.go:190] setting up certificates
	I1017 20:08:09.532902  471476 provision.go:84] configureAuth start
	I1017 20:08:09.532965  471476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-740780
	I1017 20:08:09.550500  471476 provision.go:143] copyHostCerts
	I1017 20:08:09.550565  471476 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 20:08:09.550576  471476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 20:08:09.550652  471476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 20:08:09.550739  471476 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 20:08:09.550745  471476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 20:08:09.550768  471476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 20:08:09.550818  471476 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 20:08:09.550823  471476 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 20:08:09.550845  471476 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 20:08:09.550889  471476 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-740780 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-740780 localhost minikube]
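	[Editor's aside] configureAuth then mints a machine server certificate whose SANs are exactly the list printed above (127.0.0.1, 192.168.76.2, the profile name, localhost, minikube), signed by the shared minikube CA. A self-contained sketch with a throwaway CA standing in for ca.pem/ca-key.pem; the 26280h lifetime is the CertExpiration from the cluster config:

    // server_cert.go: issue a server cert with the SANs shown in the log.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(caDER)

        // Server certificate for the machine, SANs as in the log line above.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-740780"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"default-k8s-diff-port-740780", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }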
	I1017 20:08:09.912917  471476 provision.go:177] copyRemoteCerts
	I1017 20:08:09.912984  471476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:08:09.913035  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:09.930982  471476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:08:10.037530  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:08:10.057354  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1017 20:08:10.076382  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:08:10.098841  471476 provision.go:87] duration metric: took 565.923647ms to configureAuth
	I1017 20:08:10.098876  471476 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:08:10.099099  471476 config.go:182] Loaded profile config "default-k8s-diff-port-740780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:08:10.099230  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:10.121230  471476 main.go:141] libmachine: Using SSH client type: native
	I1017 20:08:10.124488  471476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33439 <nil> <nil>}
	I1017 20:08:10.124546  471476 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:08:10.484226  471476 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:08:10.484281  471476 machine.go:96] duration metric: took 4.502440537s to provisionDockerMachine
	I1017 20:08:10.484307  471476 client.go:171] duration metric: took 14.121191487s to LocalClient.Create
	I1017 20:08:10.484356  471476 start.go:167] duration metric: took 14.121302113s to libmachine.API.Create "default-k8s-diff-port-740780"
	I1017 20:08:10.484379  471476 start.go:293] postStartSetup for "default-k8s-diff-port-740780" (driver="docker")
	I1017 20:08:10.484404  471476 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:08:10.484498  471476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:08:10.484599  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:10.501824  471476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:08:10.604870  471476 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:08:10.609426  471476 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:08:10.609456  471476 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:08:10.609468  471476 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 20:08:10.609544  471476 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 20:08:10.609631  471476 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 20:08:10.609738  471476 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:08:10.617698  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:08:10.636354  471476 start.go:296] duration metric: took 151.944566ms for postStartSetup
	I1017 20:08:10.636784  471476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-740780
	I1017 20:08:10.653853  471476 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/config.json ...
	I1017 20:08:10.654154  471476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:08:10.654211  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:10.671051  471476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:08:10.773458  471476 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:08:10.778327  471476 start.go:128] duration metric: took 14.41905032s to createHost
	I1017 20:08:10.778362  471476 start.go:83] releasing machines lock for "default-k8s-diff-port-740780", held for 14.419240493s
	I1017 20:08:10.778470  471476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-740780
	I1017 20:08:10.796807  471476 ssh_runner.go:195] Run: cat /version.json
	I1017 20:08:10.796859  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:10.796866  471476 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:08:10.796927  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:10.815679  471476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:08:10.820385  471476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:08:11.020151  471476 ssh_runner.go:195] Run: systemctl --version
	I1017 20:08:11.026822  471476 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:08:11.061990  471476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:08:11.066532  471476 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:08:11.066676  471476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:08:11.095325  471476 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1017 20:08:11.095351  471476 start.go:495] detecting cgroup driver to use...
	I1017 20:08:11.095406  471476 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:08:11.095479  471476 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:08:11.114767  471476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:08:11.133206  471476 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:08:11.133309  471476 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:08:11.153924  471476 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:08:11.174100  471476 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:08:11.306397  471476 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:08:11.447880  471476 docker.go:234] disabling docker service ...
	I1017 20:08:11.448017  471476 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:08:11.470150  471476 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:08:11.485325  471476 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:08:11.600871  471476 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:08:11.711969  471476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:08:11.725875  471476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:08:11.740728  471476 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:08:11.740835  471476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:08:11.750217  471476 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:08:11.750301  471476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:08:11.759114  471476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:08:11.767799  471476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:08:11.778890  471476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:08:11.787617  471476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:08:11.796501  471476 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:08:11.813193  471476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:08:11.822753  471476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:08:11.830294  471476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:08:11.838099  471476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:08:11.964044  471476 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:08:12.288699  471476 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:08:12.288776  471476 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:08:12.293890  471476 start.go:563] Will wait 60s for crictl version
	I1017 20:08:12.293992  471476 ssh_runner.go:195] Run: which crictl
	I1017 20:08:12.298634  471476 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:08:12.325899  471476 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
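	[Editor's aside] After rewriting the cri-o drop-in config and restarting the service, the start path waits (up to 60s each) for the crio socket to appear and for crictl to answer; the version block above is the result. A sketch of that wait, with the 500ms retry interval being an assumption (the log only states the 60s budget):

    // crio_wait.go: wait for /var/run/crio/crio.sock, then ask crictl for the version.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func waitFor(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitFor("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            panic(err)
        }
        out, err := exec.Command("sudo", "/usr/local/bin/crictl", "version").CombinedOutput()
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }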
	I1017 20:08:12.326005  471476 ssh_runner.go:195] Run: crio --version
	I1017 20:08:12.355928  471476 ssh_runner.go:195] Run: crio --version
	I1017 20:08:12.391246  471476 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1017 20:08:11.860577  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	W1017 20:08:14.361081  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	I1017 20:08:12.394105  471476 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-740780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:08:12.411072  471476 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 20:08:12.415468  471476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:08:12.424967  471476 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-740780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-740780 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:08:12.425082  471476 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:08:12.425139  471476 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:08:12.461920  471476 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:08:12.461944  471476 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:08:12.462002  471476 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:08:12.487289  471476 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:08:12.487315  471476 cache_images.go:85] Images are preloaded, skipping loading
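	[Editor's aside] The "all images are preloaded" decision comes from parsing `sudo crictl images --output json` inside the node and checking the tags against the expected list. A sketch that models only the repoTags field used here:

    // preloaded_images.go: list what cri-o already has, as the log's check does.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        fmt.Printf("cri-o reports %d images\n", len(list.Images))
        for _, img := range list.Images {
            fmt.Println(" ", img.RepoTags)
        }
    }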
	I1017 20:08:12.487324  471476 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1017 20:08:12.487410  471476 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-740780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-740780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:08:12.487494  471476 ssh_runner.go:195] Run: crio config
	I1017 20:08:12.553162  471476 cni.go:84] Creating CNI manager for ""
	I1017 20:08:12.553188  471476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:08:12.553205  471476 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:08:12.553228  471476 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-740780 NodeName:default-k8s-diff-port-740780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:08:12.553358  471476 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-740780"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:08:12.553438  471476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:08:12.565451  471476 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:08:12.565524  471476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:08:12.574553  471476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1017 20:08:12.587897  471476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:08:12.601751  471476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1017 20:08:12.615377  471476 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:08:12.618841  471476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:08:12.628376  471476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:08:12.744492  471476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:08:12.760368  471476 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780 for IP: 192.168.76.2
	I1017 20:08:12.760387  471476 certs.go:195] generating shared ca certs ...
	I1017 20:08:12.760402  471476 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:12.760613  471476 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 20:08:12.760659  471476 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 20:08:12.760667  471476 certs.go:257] generating profile certs ...
	I1017 20:08:12.760721  471476 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.key
	I1017 20:08:12.760732  471476 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.crt with IP's: []
	I1017 20:08:13.283368  471476 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.crt ...
	I1017 20:08:13.283402  471476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.crt: {Name:mkdcfa98906e44150f55d463818efda9144d9a82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:13.283602  471476 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.key ...
	I1017 20:08:13.283621  471476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.key: {Name:mk4850b021ac99e7073fadd55c4842af8142c277 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:13.283721  471476 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.key.79d0c2c9
	I1017 20:08:13.283741  471476 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.crt.79d0c2c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1017 20:08:14.582058  471476 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.crt.79d0c2c9 ...
	I1017 20:08:14.582092  471476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.crt.79d0c2c9: {Name:mk9b6998f2a4fe254f3f17cfb7afa631ef0192cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:14.582344  471476 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.key.79d0c2c9 ...
	I1017 20:08:14.582364  471476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.key.79d0c2c9: {Name:mkc471f426f45c198cb2f26b3488174f65aae5d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:14.582510  471476 certs.go:382] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.crt.79d0c2c9 -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.crt
	I1017 20:08:14.582621  471476 certs.go:386] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.key.79d0c2c9 -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.key
	I1017 20:08:14.582688  471476 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.key
	I1017 20:08:14.582711  471476 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.crt with IP's: []
	I1017 20:08:15.735954  471476 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.crt ...
	I1017 20:08:15.735985  471476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.crt: {Name:mk69b2b16cddbbfea363d64b0c07e981e5ba15fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:15.736173  471476 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.key ...
	I1017 20:08:15.736224  471476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.key: {Name:mk82a2a7362a1acefc9ebfc6b1ca0c874cff93d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:15.736427  471476 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 20:08:15.736475  471476 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 20:08:15.736489  471476 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:08:15.736513  471476 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:08:15.736562  471476 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:08:15.736591  471476 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 20:08:15.736642  471476 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:08:15.737261  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:08:15.755871  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:08:15.775053  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:08:15.799705  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 20:08:15.819800  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 20:08:15.839532  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:08:15.860010  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:08:15.878687  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:08:15.895733  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:08:15.917814  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 20:08:15.935680  471476 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 20:08:15.953223  471476 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
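	The cert steps above reuse the shared minikubeCA to sign per-profile client, apiserver, and proxy-client certs, then copy them onto the node. A minimal standalone sketch (not minikube code; the file names are assumed local copies of the paths in the log) of checking that such a client cert chains to the shared CA:

	// sketch: verify client.crt is signed by ca.crt (paths assumed)
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		caPEM, err := os.ReadFile("ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		clientPEM, err := os.ReadFile("client.crt")
		if err != nil {
			log.Fatal(err)
		}

		roots := x509.NewCertPool()
		if !roots.AppendCertsFromPEM(caPEM) {
			log.Fatal("could not parse CA PEM")
		}

		block, _ := pem.Decode(clientPEM)
		if block == nil {
			log.Fatal("could not decode client PEM")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}

		// The profile certs generated above are signed by "minikubeCA".
		if _, err := cert.Verify(x509.VerifyOptions{
			Roots:     roots,
			KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageAny},
		}); err != nil {
			log.Fatal("cert does not chain to CA: ", err)
		}
		fmt.Println("client.crt is signed by ca.crt")
	}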
	I1017 20:08:15.965806  471476 ssh_runner.go:195] Run: openssl version
	I1017 20:08:15.971953  471476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:08:15.980011  471476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:08:15.983673  471476 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:08:15.983738  471476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:08:16.025264  471476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:08:16.034352  471476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 20:08:16.042940  471476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 20:08:16.046971  471476 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 20:08:16.047079  471476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 20:08:16.088217  471476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 20:08:16.096634  471476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 20:08:16.105151  471476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 20:08:16.109004  471476 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 20:08:16.109074  471476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 20:08:16.150224  471476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:08:16.158332  471476 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:08:16.161837  471476 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 20:08:16.161891  471476 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-740780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-740780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:08:16.161964  471476 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:08:16.162021  471476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:08:16.188822  471476 cri.go:89] found id: ""
	I1017 20:08:16.188901  471476 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:08:16.196785  471476 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:08:16.209092  471476 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 20:08:16.209204  471476 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:08:16.217695  471476 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:08:16.217757  471476 kubeadm.go:157] found existing configuration files:
	
	I1017 20:08:16.217823  471476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1017 20:08:16.225697  471476 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:08:16.225781  471476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:08:16.233284  471476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1017 20:08:16.240900  471476 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:08:16.240973  471476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:08:16.248894  471476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1017 20:08:16.256616  471476 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:08:16.256709  471476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:08:16.264195  471476 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1017 20:08:16.272200  471476 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:08:16.272264  471476 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
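	The four grep/rm pairs above implement the stale-config cleanup: each /etc/kubernetes/*.conf is kept only if it already references the expected endpoint, otherwise it is removed before kubeadm init regenerates it. An illustrative sketch of that shape (not minikube's implementation; endpoint and paths copied from the log):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanStaleConfigs removes any kubeconfig that does not mention the
	// expected control-plane endpoint, so kubeadm can regenerate it.
	func cleanStaleConfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				_ = os.Remove(f)
				fmt.Printf("removed stale %s\n", f)
				continue
			}
			fmt.Printf("kept %s\n", f)
		}
	}

	func main() {
		cleanStaleConfigs("https://control-plane.minikube.internal:8444", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}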
	I1017 20:08:16.280088  471476 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 20:08:16.324174  471476 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 20:08:16.324240  471476 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 20:08:16.357384  471476 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 20:08:16.357459  471476 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1017 20:08:16.357503  471476 kubeadm.go:318] OS: Linux
	I1017 20:08:16.357555  471476 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 20:08:16.357605  471476 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1017 20:08:16.357654  471476 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 20:08:16.357704  471476 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 20:08:16.357754  471476 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 20:08:16.357805  471476 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 20:08:16.357852  471476 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 20:08:16.357907  471476 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 20:08:16.357960  471476 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1017 20:08:16.431190  471476 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 20:08:16.431415  471476 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 20:08:16.431569  471476 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 20:08:16.439133  471476 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1017 20:08:16.860887  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	W1017 20:08:18.861307  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	I1017 20:08:16.444395  471476 out.go:252]   - Generating certificates and keys ...
	I1017 20:08:16.444581  471476 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 20:08:16.444679  471476 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 20:08:17.191151  471476 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 20:08:17.269311  471476 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 20:08:17.573364  471476 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 20:08:18.461369  471476 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 20:08:18.723488  471476 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 20:08:18.723736  471476 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-740780 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1017 20:08:18.988867  471476 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 20:08:18.989174  471476 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-740780 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1017 20:08:19.479053  471476 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 20:08:20.549373  471476 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	W1017 20:08:20.861473  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	W1017 20:08:23.361902  468306 pod_ready.go:104] pod "coredns-66bc5c9577-q9n55" is not "Ready", error: <nil>
	I1017 20:08:22.104364  471476 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 20:08:22.104704  471476 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 20:08:23.520635  471476 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 20:08:24.062218  471476 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 20:08:24.429027  471476 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 20:08:24.891370  471476 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 20:08:25.475152  471476 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 20:08:25.475971  471476 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 20:08:25.478568  471476 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 20:08:25.481915  471476 out.go:252]   - Booting up control plane ...
	I1017 20:08:25.482019  471476 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 20:08:25.482106  471476 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 20:08:25.482185  471476 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 20:08:25.505687  471476 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 20:08:25.505808  471476 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 20:08:25.514215  471476 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 20:08:25.514537  471476 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 20:08:25.514779  471476 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 20:08:25.656978  471476 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 20:08:25.661908  471476 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
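	The kubelet-check above simply waits for the kubelet's local healthz endpoint to answer before continuing. A rough sketch of that wait, assuming a plain HTTP poll (the endpoint and 4m0s bound come from the log; the loop itself is illustrative, not kubeadm's code):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// waitForKubelet polls url until it returns 200 OK or timeout elapses.
	func waitForKubelet(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kubelet not healthy after %s", timeout)
	}

	func main() {
		if err := waitForKubelet("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kubelet is healthy")
	}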
	I1017 20:08:25.359729  468306 pod_ready.go:94] pod "coredns-66bc5c9577-q9n55" is "Ready"
	I1017 20:08:25.359803  468306 pod_ready.go:86] duration metric: took 33.005558762s for pod "coredns-66bc5c9577-q9n55" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:25.362573  468306 pod_ready.go:83] waiting for pod "etcd-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:25.367239  468306 pod_ready.go:94] pod "etcd-embed-certs-572724" is "Ready"
	I1017 20:08:25.367255  468306 pod_ready.go:86] duration metric: took 4.665006ms for pod "etcd-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:25.369418  468306 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:25.373508  468306 pod_ready.go:94] pod "kube-apiserver-embed-certs-572724" is "Ready"
	I1017 20:08:25.373559  468306 pod_ready.go:86] duration metric: took 4.125337ms for pod "kube-apiserver-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:25.375609  468306 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:25.559095  468306 pod_ready.go:94] pod "kube-controller-manager-embed-certs-572724" is "Ready"
	I1017 20:08:25.559127  468306 pod_ready.go:86] duration metric: took 183.479261ms for pod "kube-controller-manager-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:25.757437  468306 pod_ready.go:83] waiting for pod "kube-proxy-2jxkk" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:26.159210  468306 pod_ready.go:94] pod "kube-proxy-2jxkk" is "Ready"
	I1017 20:08:26.159297  468306 pod_ready.go:86] duration metric: took 401.816788ms for pod "kube-proxy-2jxkk" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:26.358105  468306 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:26.758460  468306 pod_ready.go:94] pod "kube-scheduler-embed-certs-572724" is "Ready"
	I1017 20:08:26.758485  468306 pod_ready.go:86] duration metric: took 400.298437ms for pod "kube-scheduler-embed-certs-572724" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:08:26.758499  468306 pod_ready.go:40] duration metric: took 34.410781325s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:08:26.877389  468306 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 20:08:26.880455  468306 out.go:179] * Done! kubectl is now configured to use "embed-certs-572724" cluster and "default" namespace by default
	I1017 20:08:28.163237  471476 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.501429763s
	I1017 20:08:28.166885  471476 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 20:08:28.166984  471476 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1017 20:08:28.167077  471476 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 20:08:28.167160  471476 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 20:08:30.957099  471476 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.789628271s
	I1017 20:08:32.473922  471476 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.307035154s
	I1017 20:08:34.168512  471476 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001505197s
	I1017 20:08:34.191329  471476 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 20:08:34.214350  471476 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 20:08:34.227790  471476 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 20:08:34.228037  471476 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-740780 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 20:08:34.245096  471476 kubeadm.go:318] [bootstrap-token] Using token: 6bl1gy.fzpcm8t5vlrraadh
	I1017 20:08:34.248272  471476 out.go:252]   - Configuring RBAC rules ...
	I1017 20:08:34.248409  471476 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 20:08:34.252995  471476 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 20:08:34.263465  471476 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 20:08:34.267767  471476 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 20:08:34.272312  471476 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 20:08:34.276356  471476 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 20:08:34.576889  471476 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 20:08:35.025049  471476 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 20:08:35.575142  471476 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 20:08:35.576400  471476 kubeadm.go:318] 
	I1017 20:08:35.576474  471476 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 20:08:35.576480  471476 kubeadm.go:318] 
	I1017 20:08:35.576596  471476 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 20:08:35.576604  471476 kubeadm.go:318] 
	I1017 20:08:35.576629  471476 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 20:08:35.576690  471476 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 20:08:35.576743  471476 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 20:08:35.576747  471476 kubeadm.go:318] 
	I1017 20:08:35.576803  471476 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 20:08:35.576808  471476 kubeadm.go:318] 
	I1017 20:08:35.576858  471476 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 20:08:35.576862  471476 kubeadm.go:318] 
	I1017 20:08:35.576923  471476 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 20:08:35.577002  471476 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 20:08:35.577073  471476 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 20:08:35.577083  471476 kubeadm.go:318] 
	I1017 20:08:35.577171  471476 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 20:08:35.577252  471476 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 20:08:35.577256  471476 kubeadm.go:318] 
	I1017 20:08:35.577343  471476 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token 6bl1gy.fzpcm8t5vlrraadh \
	I1017 20:08:35.577451  471476 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 \
	I1017 20:08:35.577472  471476 kubeadm.go:318] 	--control-plane 
	I1017 20:08:35.577476  471476 kubeadm.go:318] 
	I1017 20:08:35.577564  471476 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 20:08:35.577569  471476 kubeadm.go:318] 
	I1017 20:08:35.577654  471476 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token 6bl1gy.fzpcm8t5vlrraadh \
	I1017 20:08:35.577760  471476 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 
	I1017 20:08:35.580892  471476 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1017 20:08:35.581125  471476 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1017 20:08:35.581234  471476 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 20:08:35.581250  471476 cni.go:84] Creating CNI manager for ""
	I1017 20:08:35.581258  471476 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:08:35.584504  471476 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 20:08:35.587408  471476 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 20:08:35.591818  471476 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 20:08:35.591841  471476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 20:08:35.605693  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 20:08:35.908506  471476 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 20:08:35.908671  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:35.908758  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-740780 minikube.k8s.io/updated_at=2025_10_17T20_08_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d minikube.k8s.io/name=default-k8s-diff-port-740780 minikube.k8s.io/primary=true
	I1017 20:08:36.119224  471476 ops.go:34] apiserver oom_adj: -16
	I1017 20:08:36.119362  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:36.619582  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:37.119516  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:37.620443  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:38.120143  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:38.619459  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:39.120319  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:39.620315  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:40.119398  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:40.619941  471476 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:08:40.830186  471476 kubeadm.go:1113] duration metric: took 4.921566276s to wait for elevateKubeSystemPrivileges
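	The repeated "kubectl get sa default" runs above are the elevateKubeSystemPrivileges wait: minikube retries roughly every 500ms until the default ServiceAccount exists, then reports the total duration. A sketch of that retry shape (binary path, flags, and kubeconfig copied from the log; the loop itself is an assumption, not minikube's exact code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA retries "kubectl get sa default" until it succeeds
	// or the timeout is reached.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // default service account exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/binaries/v1.34.1/kubectl",
			"/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("default service account is present")
	}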
	I1017 20:08:40.830212  471476 kubeadm.go:402] duration metric: took 24.668325763s to StartCluster
	I1017 20:08:40.830228  471476 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:40.830291  471476 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:08:40.831855  471476 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:40.832083  471476 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:08:40.832465  471476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 20:08:40.832679  471476 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:08:40.832762  471476 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-740780"
	I1017 20:08:40.832776  471476 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-740780"
	I1017 20:08:40.832800  471476 host.go:66] Checking if "default-k8s-diff-port-740780" exists ...
	I1017 20:08:40.832838  471476 config.go:182] Loaded profile config "default-k8s-diff-port-740780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:08:40.832877  471476 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-740780"
	I1017 20:08:40.832888  471476 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-740780"
	I1017 20:08:40.833220  471476 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:08:40.833262  471476 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:08:40.844337  471476 out.go:179] * Verifying Kubernetes components...
	I1017 20:08:40.847326  471476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:08:40.891266  471476 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-740780"
	I1017 20:08:40.891315  471476 host.go:66] Checking if "default-k8s-diff-port-740780" exists ...
	I1017 20:08:40.891778  471476 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:08:40.908202  471476 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:08:40.911129  471476 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:08:40.911154  471476 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:08:40.911217  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:40.971166  471476 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:08:40.971193  471476 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:08:40.971257  471476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:08:40.982114  471476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:08:41.007068  471476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:08:41.258193  471476 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:08:41.258421  471476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 20:08:41.448124  471476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:08:41.722077  471476 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:08:42.055626  471476 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
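	The sed pipeline a few lines above edits the coredns ConfigMap so that a hosts block is inserted ahead of the "forward . /etc/resolv.conf" plugin (and a log directive ahead of errors). Reconstructed from that command, not captured from the cluster, the injected Corefile fragment looks roughly like:

	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }

	This is what makes host.minikube.internal resolvable from inside the cluster, as the "host record injected" line confirms.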
	I1017 20:08:42.058467  471476 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-740780" to be "Ready" ...
	I1017 20:08:42.562153  471476 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-740780" context rescaled to 1 replicas
	I1017 20:08:42.679759  471476 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.231594752s)
	I1017 20:08:42.695237  471476 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	
	
	==> CRI-O <==
	Oct 17 20:08:30 embed-certs-572724 crio[646]: time="2025-10-17T20:08:30.797549022Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=743a8558-6ce6-4ac8-8024-a37f70c8a33e name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:08:30 embed-certs-572724 crio[646]: time="2025-10-17T20:08:30.799077809Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4494190b-2e83-4d54-952a-e575c909607c name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:08:30 embed-certs-572724 crio[646]: time="2025-10-17T20:08:30.800098721Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zr5k/dashboard-metrics-scraper" id=e9bd0d80-4b2b-4a14-bc52-098508595228 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:08:30 embed-certs-572724 crio[646]: time="2025-10-17T20:08:30.800327334Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:08:30 embed-certs-572724 crio[646]: time="2025-10-17T20:08:30.810376203Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:08:30 embed-certs-572724 crio[646]: time="2025-10-17T20:08:30.812915835Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:08:30 embed-certs-572724 crio[646]: time="2025-10-17T20:08:30.849477315Z" level=info msg="Created container e09ff9a6ff0e54673acb0dbb9922bee948c0f6a0cf24ad23380a636f2ce15717: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zr5k/dashboard-metrics-scraper" id=e9bd0d80-4b2b-4a14-bc52-098508595228 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:08:30 embed-certs-572724 crio[646]: time="2025-10-17T20:08:30.850583221Z" level=info msg="Starting container: e09ff9a6ff0e54673acb0dbb9922bee948c0f6a0cf24ad23380a636f2ce15717" id=e8dceeaa-61f2-46cc-bba0-d0a482391f64 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:08:30 embed-certs-572724 crio[646]: time="2025-10-17T20:08:30.852395357Z" level=info msg="Started container" PID=1663 containerID=e09ff9a6ff0e54673acb0dbb9922bee948c0f6a0cf24ad23380a636f2ce15717 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zr5k/dashboard-metrics-scraper id=e8dceeaa-61f2-46cc-bba0-d0a482391f64 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0553efe778e12f4a5596685af577687a94aac33ea614ce1f7c2bd412ffcaffe2
	Oct 17 20:08:30 embed-certs-572724 conmon[1661]: conmon e09ff9a6ff0e54673acb <ninfo>: container 1663 exited with status 1
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.183641803Z" level=info msg="Removing container: 2b392a07f26aa14a78b2c5da250bca827e7f4d45907831cdcece3d346c3e6be1" id=9bdac920-802e-47c8-89a8-757e647cbc9a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.191681626Z" level=info msg="Error loading conmon cgroup of container 2b392a07f26aa14a78b2c5da250bca827e7f4d45907831cdcece3d346c3e6be1: cgroup deleted" id=9bdac920-802e-47c8-89a8-757e647cbc9a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.197988114Z" level=info msg="Removed container 2b392a07f26aa14a78b2c5da250bca827e7f4d45907831cdcece3d346c3e6be1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zr5k/dashboard-metrics-scraper" id=9bdac920-802e-47c8-89a8-757e647cbc9a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.407931069Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.413358016Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.413527948Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.413606715Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.419950674Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.41998648Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.420003621Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.428151732Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.428187859Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.428213039Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.431333152Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:08:31 embed-certs-572724 crio[646]: time="2025-10-17T20:08:31.431518484Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	e09ff9a6ff0e5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago       Exited              dashboard-metrics-scraper   2                   0553efe778e12       dashboard-metrics-scraper-6ffb444bf9-8zr5k   kubernetes-dashboard
	b339d56587d6d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago       Running             storage-provisioner         2                   c8baa579adc34       storage-provisioner                          kube-system
	3fc1fbe7031f4       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago       Running             kubernetes-dashboard        0                   db5e8dc78a58b       kubernetes-dashboard-855c9754f9-2gxq5        kubernetes-dashboard
	1bd164b19059d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   265c02a6702c1       busybox                                      default
	616441d046923       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   3418026f9340b       kindnet-cg6w6                                kube-system
	c6e6947cc661c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   b78a39df92670       coredns-66bc5c9577-q9n55                     kube-system
	f2819e934092e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   0fd7a1e937c36       kube-proxy-2jxkk                             kube-system
	bd8bdd7d12816       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   c8baa579adc34       storage-provisioner                          kube-system
	711a3fa869605       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   91a1be632a717       kube-controller-manager-embed-certs-572724   kube-system
	e224a6e5eb1ca       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   a22127695ef2a       kube-apiserver-embed-certs-572724            kube-system
	0c97fc08388e7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   acaa92dc160da       kube-scheduler-embed-certs-572724            kube-system
	2e90f4799ad4c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   462ae833db130       etcd-embed-certs-572724                      kube-system
	
	
	==> coredns [c6e6947cc661c44f39b176dbf73fa36646f4d009c8d033da09d40f50914d3312] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52533 - 16189 "HINFO IN 8006110790023923564.2537412446222084605. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003645143s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-572724
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-572724
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=embed-certs-572724
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_06_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:06:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-572724
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:08:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:08:20 +0000   Fri, 17 Oct 2025 20:06:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:08:20 +0000   Fri, 17 Oct 2025 20:06:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:08:20 +0000   Fri, 17 Oct 2025 20:06:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:08:20 +0000   Fri, 17 Oct 2025 20:07:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-572724
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                20557b6e-804a-45ff-a381-36f74b0f1294
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-q9n55                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m19s
	  kube-system                 etcd-embed-certs-572724                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-cg6w6                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-embed-certs-572724             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-embed-certs-572724    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-2jxkk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-embed-certs-572724             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8zr5k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2gxq5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   Starting                 52s                    kube-proxy       
	  Normal   Starting                 2m37s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m37s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m37s (x8 over 2m37s)  kubelet          Node embed-certs-572724 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m37s (x8 over 2m37s)  kubelet          Node embed-certs-572724 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m37s (x8 over 2m37s)  kubelet          Node embed-certs-572724 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m25s                  kubelet          Node embed-certs-572724 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m25s                  kubelet          Node embed-certs-572724 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m25s                  kubelet          Node embed-certs-572724 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m25s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m21s                  node-controller  Node embed-certs-572724 event: Registered Node embed-certs-572724 in Controller
	  Normal   NodeReady                98s                    kubelet          Node embed-certs-572724 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node embed-certs-572724 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node embed-certs-572724 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node embed-certs-572724 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node embed-certs-572724 event: Registered Node embed-certs-572724 in Controller
	
	
	==> dmesg <==
	[Oct17 19:45] overlayfs: idmapped layers are currently not supported
	[Oct17 19:46] overlayfs: idmapped layers are currently not supported
	[ +18.070710] overlayfs: idmapped layers are currently not supported
	[Oct17 19:47] overlayfs: idmapped layers are currently not supported
	[ +43.697346] overlayfs: idmapped layers are currently not supported
	[Oct17 19:48] overlayfs: idmapped layers are currently not supported
	[Oct17 19:49] overlayfs: idmapped layers are currently not supported
	[ +26.194162] overlayfs: idmapped layers are currently not supported
	[Oct17 19:50] overlayfs: idmapped layers are currently not supported
	[Oct17 19:52] overlayfs: idmapped layers are currently not supported
	[Oct17 19:54] overlayfs: idmapped layers are currently not supported
	[Oct17 19:55] overlayfs: idmapped layers are currently not supported
	[Oct17 19:56] overlayfs: idmapped layers are currently not supported
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:01] overlayfs: idmapped layers are currently not supported
	[ +29.873287] overlayfs: idmapped layers are currently not supported
	[Oct17 20:02] overlayfs: idmapped layers are currently not supported
	[ +29.827785] overlayfs: idmapped layers are currently not supported
	[Oct17 20:03] overlayfs: idmapped layers are currently not supported
	[Oct17 20:04] overlayfs: idmapped layers are currently not supported
	[Oct17 20:05] overlayfs: idmapped layers are currently not supported
	[Oct17 20:06] overlayfs: idmapped layers are currently not supported
	[Oct17 20:07] overlayfs: idmapped layers are currently not supported
	[ +30.002292] overlayfs: idmapped layers are currently not supported
	[Oct17 20:08] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2e90f4799ad4c01480d7887c5d52c632cc0dc3dea6d59784485224961e8a45af] <==
	{"level":"warn","ts":"2025-10-17T20:07:46.277600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.307692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.340717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.383141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.415929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.455141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.487313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.532141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.587890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.634345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.668329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.697176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.736498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.752331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.783406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.873687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.889172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.979325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:46.985691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:47.014079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:47.042418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:47.077798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:47.100200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:47.124914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:07:47.291464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47008","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:08:45 up  2:51,  0 user,  load average: 5.30, 4.81, 3.44
	Linux embed-certs-572724 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [616441d046923f26d47dc809cc6b9e4d2928b5f8fe7cdb708bd4cc510cc8b27e] <==
	I1017 20:07:51.200282       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:07:51.213274       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 20:07:51.213471       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:07:51.213485       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:07:51.213499       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:07:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:07:51.407620       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:07:51.407689       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:07:51.407721       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:07:51.408991       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 20:08:21.408298       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 20:08:21.408432       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 20:08:21.408589       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 20:08:21.409538       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1017 20:08:22.608169       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:08:22.608211       1 metrics.go:72] Registering metrics
	I1017 20:08:22.608273       1 controller.go:711] "Syncing nftables rules"
	I1017 20:08:31.407404       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:08:31.407461       1 main.go:301] handling current node
	I1017 20:08:41.408665       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 20:08:41.408699       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e224a6e5eb1ca81a4a48fbcc8536252f742bddc7bc1c3afbd37a26b29ac8c998] <==
	I1017 20:07:49.564641       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 20:07:49.564768       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1017 20:07:49.564988       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:07:49.576813       1 aggregator.go:171] initial CRD sync complete...
	I1017 20:07:49.576843       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 20:07:49.576850       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:07:49.576857       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:07:49.579111       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 20:07:49.579972       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 20:07:49.580060       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:07:49.619040       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 20:07:49.648783       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:07:49.655810       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:07:49.767946       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1017 20:07:49.840289       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:07:50.010902       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:07:51.333858       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:07:51.613590       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:07:51.806443       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:07:51.849060       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:07:52.141000       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.35.189"}
	I1017 20:07:52.232362       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.32.142"}
	I1017 20:07:54.541036       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:07:54.666637       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:07:54.777695       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [711a3fa869605d5a18b3f9781975225dfdd63bf72d85af3b2ba7101a28d13528] <==
	I1017 20:07:54.128367       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 20:07:54.131054       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:07:54.131077       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:07:54.131084       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:07:54.131175       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 20:07:54.133423       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 20:07:54.135993       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:07:54.139044       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:07:54.141171       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:07:54.142595       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:07:54.148755       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 20:07:54.151095       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:07:54.152136       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 20:07:54.158403       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 20:07:54.169667       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 20:07:54.176040       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 20:07:54.176144       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 20:07:54.176169       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 20:07:54.176263       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 20:07:54.177173       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 20:07:54.178273       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 20:07:54.180624       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:07:54.180732       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:07:54.813986       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1017 20:07:54.814068       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [f2819e934092e79c2bb65da9f76d0f0615b9efa4dae95114b34ceb074d2f63b2] <==
	I1017 20:07:52.276926       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:07:52.426252       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:07:52.526344       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:07:52.526448       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 20:07:52.526542       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:07:52.556387       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:07:52.556573       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:07:52.569079       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:07:52.569478       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:07:52.569700       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:07:52.570975       1 config.go:200] "Starting service config controller"
	I1017 20:07:52.571093       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:07:52.571143       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:07:52.571170       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:07:52.571205       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:07:52.571232       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:07:52.571897       1 config.go:309] "Starting node config controller"
	I1017 20:07:52.575463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:07:52.575565       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:07:52.671736       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:07:52.671830       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:07:52.671850       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0c97fc08388e70c856c936895f529c1a760925d708cce00a9944a4dd9c8d36a3] <==
	I1017 20:07:48.378068       1 serving.go:386] Generated self-signed cert in-memory
	I1017 20:07:52.303546       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:07:52.303944       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:07:52.315567       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:07:52.315662       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 20:07:52.315687       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 20:07:52.315720       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 20:07:52.335560       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:07:52.335646       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:07:52.335695       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:07:52.335727       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:07:52.416003       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1017 20:07:52.436686       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:07:52.436628       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:07:55 embed-certs-572724 kubelet[770]: E1017 20:07:55.873508     770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dbe89ec1-24d2-4266-baf3-18b8fe7a333f-kube-api-access-6w7fb podName:dbe89ec1-24d2-4266-baf3-18b8fe7a333f nodeName:}" failed. No retries permitted until 2025-10-17 20:07:56.373475868 +0000 UTC m=+14.808274875 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6w7fb" (UniqueName: "kubernetes.io/projected/dbe89ec1-24d2-4266-baf3-18b8fe7a333f-kube-api-access-6w7fb") pod "dashboard-metrics-scraper-6ffb444bf9-8zr5k" (UID: "dbe89ec1-24d2-4266-baf3-18b8fe7a333f") : failed to sync configmap cache: timed out waiting for the condition
	Oct 17 20:07:55 embed-certs-572724 kubelet[770]: E1017 20:07:55.883805     770 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 17 20:07:55 embed-certs-572724 kubelet[770]: E1017 20:07:55.883856     770 projected.go:196] Error preparing data for projected volume kube-api-access-lwpgw for pod kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2gxq5: failed to sync configmap cache: timed out waiting for the condition
	Oct 17 20:07:55 embed-certs-572724 kubelet[770]: E1017 20:07:55.883928     770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5070c655-fe42-4815-a448-d8d4f574d03a-kube-api-access-lwpgw podName:5070c655-fe42-4815-a448-d8d4f574d03a nodeName:}" failed. No retries permitted until 2025-10-17 20:07:56.38390644 +0000 UTC m=+14.818705447 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lwpgw" (UniqueName: "kubernetes.io/projected/5070c655-fe42-4815-a448-d8d4f574d03a-kube-api-access-lwpgw") pod "kubernetes-dashboard-855c9754f9-2gxq5" (UID: "5070c655-fe42-4815-a448-d8d4f574d03a") : failed to sync configmap cache: timed out waiting for the condition
	Oct 17 20:07:56 embed-certs-572724 kubelet[770]: W1017 20:07:56.557175     770 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e/crio-db5e8dc78a58b07f5ee00bb673a5a988c893b543be6048b6bc49bec5241cf883 WatchSource:0}: Error finding container db5e8dc78a58b07f5ee00bb673a5a988c893b543be6048b6bc49bec5241cf883: Status 404 returned error can't find the container with id db5e8dc78a58b07f5ee00bb673a5a988c893b543be6048b6bc49bec5241cf883
	Oct 17 20:07:56 embed-certs-572724 kubelet[770]: W1017 20:07:56.579451     770 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6c48c7c230638f393391a474745c7291e45d4b6fe8b5512676b1bbffd3f5c59e/crio-0553efe778e12f4a5596685af577687a94aac33ea614ce1f7c2bd412ffcaffe2 WatchSource:0}: Error finding container 0553efe778e12f4a5596685af577687a94aac33ea614ce1f7c2bd412ffcaffe2: Status 404 returned error can't find the container with id 0553efe778e12f4a5596685af577687a94aac33ea614ce1f7c2bd412ffcaffe2
	Oct 17 20:08:09 embed-certs-572724 kubelet[770]: I1017 20:08:09.113586     770 scope.go:117] "RemoveContainer" containerID="a818a573bcb4067329de1d8d710f6bd33600aac6dc19092c04354c15ce05f211"
	Oct 17 20:08:09 embed-certs-572724 kubelet[770]: I1017 20:08:09.135968     770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2gxq5" podStartSLOduration=8.343845766 podStartE2EDuration="15.135950104s" podCreationTimestamp="2025-10-17 20:07:54 +0000 UTC" firstStartedPulling="2025-10-17 20:07:56.574395991 +0000 UTC m=+15.009195006" lastFinishedPulling="2025-10-17 20:08:03.366500329 +0000 UTC m=+21.801299344" observedRunningTime="2025-10-17 20:08:04.116381132 +0000 UTC m=+22.551180246" watchObservedRunningTime="2025-10-17 20:08:09.135950104 +0000 UTC m=+27.570749119"
	Oct 17 20:08:10 embed-certs-572724 kubelet[770]: I1017 20:08:10.120043     770 scope.go:117] "RemoveContainer" containerID="a818a573bcb4067329de1d8d710f6bd33600aac6dc19092c04354c15ce05f211"
	Oct 17 20:08:10 embed-certs-572724 kubelet[770]: I1017 20:08:10.120824     770 scope.go:117] "RemoveContainer" containerID="2b392a07f26aa14a78b2c5da250bca827e7f4d45907831cdcece3d346c3e6be1"
	Oct 17 20:08:10 embed-certs-572724 kubelet[770]: E1017 20:08:10.120983     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zr5k_kubernetes-dashboard(dbe89ec1-24d2-4266-baf3-18b8fe7a333f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zr5k" podUID="dbe89ec1-24d2-4266-baf3-18b8fe7a333f"
	Oct 17 20:08:11 embed-certs-572724 kubelet[770]: I1017 20:08:11.124298     770 scope.go:117] "RemoveContainer" containerID="2b392a07f26aa14a78b2c5da250bca827e7f4d45907831cdcece3d346c3e6be1"
	Oct 17 20:08:11 embed-certs-572724 kubelet[770]: E1017 20:08:11.125213     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zr5k_kubernetes-dashboard(dbe89ec1-24d2-4266-baf3-18b8fe7a333f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zr5k" podUID="dbe89ec1-24d2-4266-baf3-18b8fe7a333f"
	Oct 17 20:08:16 embed-certs-572724 kubelet[770]: I1017 20:08:16.533700     770 scope.go:117] "RemoveContainer" containerID="2b392a07f26aa14a78b2c5da250bca827e7f4d45907831cdcece3d346c3e6be1"
	Oct 17 20:08:16 embed-certs-572724 kubelet[770]: E1017 20:08:16.533908     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zr5k_kubernetes-dashboard(dbe89ec1-24d2-4266-baf3-18b8fe7a333f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zr5k" podUID="dbe89ec1-24d2-4266-baf3-18b8fe7a333f"
	Oct 17 20:08:22 embed-certs-572724 kubelet[770]: I1017 20:08:22.154471     770 scope.go:117] "RemoveContainer" containerID="bd8bdd7d12816cda744332cb3b34ffb8e05940de2f7dada91b4a4b21564e0d39"
	Oct 17 20:08:30 embed-certs-572724 kubelet[770]: I1017 20:08:30.796644     770 scope.go:117] "RemoveContainer" containerID="2b392a07f26aa14a78b2c5da250bca827e7f4d45907831cdcece3d346c3e6be1"
	Oct 17 20:08:31 embed-certs-572724 kubelet[770]: I1017 20:08:31.178972     770 scope.go:117] "RemoveContainer" containerID="2b392a07f26aa14a78b2c5da250bca827e7f4d45907831cdcece3d346c3e6be1"
	Oct 17 20:08:31 embed-certs-572724 kubelet[770]: I1017 20:08:31.179321     770 scope.go:117] "RemoveContainer" containerID="e09ff9a6ff0e54673acb0dbb9922bee948c0f6a0cf24ad23380a636f2ce15717"
	Oct 17 20:08:31 embed-certs-572724 kubelet[770]: E1017 20:08:31.179500     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zr5k_kubernetes-dashboard(dbe89ec1-24d2-4266-baf3-18b8fe7a333f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zr5k" podUID="dbe89ec1-24d2-4266-baf3-18b8fe7a333f"
	Oct 17 20:08:36 embed-certs-572724 kubelet[770]: I1017 20:08:36.533780     770 scope.go:117] "RemoveContainer" containerID="e09ff9a6ff0e54673acb0dbb9922bee948c0f6a0cf24ad23380a636f2ce15717"
	Oct 17 20:08:36 embed-certs-572724 kubelet[770]: E1017 20:08:36.533966     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8zr5k_kubernetes-dashboard(dbe89ec1-24d2-4266-baf3-18b8fe7a333f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8zr5k" podUID="dbe89ec1-24d2-4266-baf3-18b8fe7a333f"
	Oct 17 20:08:39 embed-certs-572724 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:08:39 embed-certs-572724 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:08:39 embed-certs-572724 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [3fc1fbe7031f4ac9b13cdb2127e2a107fca355c0213ff06b195a73131962e39d] <==
	2025/10/17 20:08:03 Starting overwatch
	2025/10/17 20:08:03 Using namespace: kubernetes-dashboard
	2025/10/17 20:08:03 Using in-cluster config to connect to apiserver
	2025/10/17 20:08:03 Using secret token for csrf signing
	2025/10/17 20:08:03 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 20:08:03 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 20:08:03 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 20:08:03 Generating JWE encryption key
	2025/10/17 20:08:03 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 20:08:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 20:08:04 Initializing JWE encryption key from synchronized object
	2025/10/17 20:08:04 Creating in-cluster Sidecar client
	2025/10/17 20:08:04 Serving insecurely on HTTP port: 9090
	2025/10/17 20:08:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 20:08:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [b339d56587d6d45e174144f0a9270220b632eb089d17efc47aa29734ab8aa116] <==
	I1017 20:08:22.253768       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:08:22.265150       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:08:22.265257       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 20:08:22.268442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:08:25.724029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:08:29.984434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:08:33.583018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:08:36.637926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:08:39.660965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:08:39.670659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:08:39.670844       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:08:39.671291       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4bb96dd9-2ce5-40c2-b9ba-fad4b582ad41", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-572724_58e2cfac-1c03-4756-a431-519f0676acfd became leader
	I1017 20:08:39.671415       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-572724_58e2cfac-1c03-4756-a431-519f0676acfd!
	W1017 20:08:39.695801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:08:39.709148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:08:39.774251       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-572724_58e2cfac-1c03-4756-a431-519f0676acfd!
	W1017 20:08:41.723947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:08:41.747965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:08:43.753073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:08:43.758122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [bd8bdd7d12816cda744332cb3b34ffb8e05940de2f7dada91b4a4b21564e0d39] <==
	I1017 20:07:51.513499       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 20:08:21.564160       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-572724 -n embed-certs-572724
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-572724 -n embed-certs-572724: exit status 2 (352.065146ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-572724 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.38s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.62s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-718789 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-718789 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (325.252891ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:09:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-718789 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
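Note: the MK_ADDON_ENABLE_PAUSED error above is produced by minikube's paused-container check, which (per the stderr output) shells out to `sudo runc list -f json` on the node and fails because /run/runc does not exist. A minimal sketch for reproducing that check by hand, assuming the newest-cni-718789 profile is still running and reachable over `minikube ssh` (the command below is an illustration, not part of the recorded test run):
	out/minikube-linux-arm64 -p newest-cni-718789 ssh -- sudo runc list -f json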
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-718789
helpers_test.go:243: (dbg) docker inspect newest-cni-718789:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498",
	        "Created": "2025-10-17T20:08:54.624965091Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 476080,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:08:54.706042559Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498/hostname",
	        "HostsPath": "/var/lib/docker/containers/637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498/hosts",
	        "LogPath": "/var/lib/docker/containers/637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498/637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498-json.log",
	        "Name": "/newest-cni-718789",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-718789:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-718789",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498",
	                "LowerDir": "/var/lib/docker/overlay2/10560d65db01a75a4f3eeb4cd08a7e8876413ee4947ae1830f45d6bc860947dc-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/10560d65db01a75a4f3eeb4cd08a7e8876413ee4947ae1830f45d6bc860947dc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/10560d65db01a75a4f3eeb4cd08a7e8876413ee4947ae1830f45d6bc860947dc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/10560d65db01a75a4f3eeb4cd08a7e8876413ee4947ae1830f45d6bc860947dc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-718789",
	                "Source": "/var/lib/docker/volumes/newest-cni-718789/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-718789",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-718789",
	                "name.minikube.sigs.k8s.io": "newest-cni-718789",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bb7234057f9f5bad725127fdfd2c3d2ec5ed858f830950b516e6e922ddbe7274",
	            "SandboxKey": "/var/run/docker/netns/bb7234057f9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-718789": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:e1:f9:8e:39:e3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8cd2eedf95aa208e706bcc7b2b128ff9ad782ac6990bd5bc75c6c1730d2dbe6",
	                    "EndpointID": "732ed8896b0bd1e6b0780d603671b738dad0bc83ed7db9e04ead5c0eeddb1f75",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-718789",
	                        "637fa246d690"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-718789 -n newest-cni-718789
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-718789 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-718789 logs -n 25: (1.175195856s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-164379 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-164379       │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ delete  │ -p old-k8s-version-135652                                                                                                                                                                                                                     │ old-k8s-version-135652       │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p no-preload-413711 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:06 UTC │
	│ delete  │ -p cert-expiration-164379                                                                                                                                                                                                                     │ cert-expiration-164379       │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:05 UTC │
	│ start   │ -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-413711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │                     │
	│ stop    │ -p no-preload-413711 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable dashboard -p no-preload-413711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p no-preload-413711 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable metrics-server -p embed-certs-572724 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ stop    │ -p embed-certs-572724 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable dashboard -p embed-certs-572724 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:08 UTC │
	│ image   │ no-preload-413711 image list --format=json                                                                                                                                                                                                    │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ pause   │ -p no-preload-413711 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ delete  │ -p no-preload-413711                                                                                                                                                                                                                          │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ delete  │ -p no-preload-413711                                                                                                                                                                                                                          │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ delete  │ -p disable-driver-mounts-672422                                                                                                                                                                                                               │ disable-driver-mounts-672422 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p default-k8s-diff-port-740780 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:09 UTC │
	│ image   │ embed-certs-572724 image list --format=json                                                                                                                                                                                                   │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ pause   │ -p embed-certs-572724 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ delete  │ -p embed-certs-572724                                                                                                                                                                                                                         │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ delete  │ -p embed-certs-572724                                                                                                                                                                                                                         │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p newest-cni-718789 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ addons  │ enable metrics-server -p newest-cni-718789 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
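For context, the most recent "start" row in the table above expands to the following invocation (COMMAND plus ARGS columns joined, using the same binary path as the rest of this report):

    out/minikube-linux-arm64 start -p newest-cni-718789 --memory=3072 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1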
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:08:49
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:08:49.155305  475692 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:08:49.155449  475692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:08:49.155475  475692 out.go:374] Setting ErrFile to fd 2...
	I1017 20:08:49.155499  475692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:08:49.155761  475692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:08:49.156208  475692 out.go:368] Setting JSON to false
	I1017 20:08:49.157204  475692 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10280,"bootTime":1760721449,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 20:08:49.157269  475692 start.go:141] virtualization:  
	I1017 20:08:49.163336  475692 out.go:179] * [newest-cni-718789] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:08:49.166569  475692 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:08:49.166681  475692 notify.go:220] Checking for updates...
	I1017 20:08:49.173036  475692 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:08:49.176185  475692 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:08:49.179177  475692 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 20:08:49.182468  475692 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:08:49.185574  475692 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:08:49.189138  475692 config.go:182] Loaded profile config "default-k8s-diff-port-740780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:08:49.189275  475692 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:08:49.216649  475692 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:08:49.216790  475692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:08:49.280926  475692 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:08:49.269361134 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:08:49.281037  475692 docker.go:318] overlay module found
	I1017 20:08:49.285901  475692 out.go:179] * Using the docker driver based on user configuration
	I1017 20:08:49.288729  475692 start.go:305] selected driver: docker
	I1017 20:08:49.288751  475692 start.go:925] validating driver "docker" against <nil>
	I1017 20:08:49.288766  475692 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:08:49.289498  475692 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:08:49.347594  475692 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:08:49.338508194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:08:49.347761  475692 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1017 20:08:49.347793  475692 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1017 20:08:49.348051  475692 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 20:08:49.350772  475692 out.go:179] * Using Docker driver with root privileges
	I1017 20:08:49.353567  475692 cni.go:84] Creating CNI manager for ""
	I1017 20:08:49.353643  475692 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:08:49.353658  475692 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 20:08:49.353739  475692 start.go:349] cluster config:
	{Name:newest-cni-718789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:08:49.356822  475692 out.go:179] * Starting "newest-cni-718789" primary control-plane node in "newest-cni-718789" cluster
	I1017 20:08:49.359556  475692 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:08:49.362368  475692 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:08:49.365064  475692 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:08:49.365118  475692 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:08:49.365130  475692 cache.go:58] Caching tarball of preloaded images
	I1017 20:08:49.365167  475692 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:08:49.365251  475692 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:08:49.365261  475692 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:08:49.365368  475692 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/config.json ...
	I1017 20:08:49.365386  475692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/config.json: {Name:mk2be392ff94ad62c16a6972165d51b2be76596e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:08:49.384366  475692 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:08:49.384388  475692 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:08:49.384406  475692 cache.go:232] Successfully downloaded all kic artifacts
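The cache decision above ("exists in daemon, skipping load") amounts to a lookup of the pinned kicbase digest in the local Docker daemon. A manual spot-check along the same lines (repository name taken from the log line; the exact command minikube runs internally is not shown here):

    # list local kicbase images together with their digests
    docker images --digests gcr.io/k8s-minikube/kicbase-builds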
	I1017 20:08:49.384430  475692 start.go:360] acquireMachinesLock for newest-cni-718789: {Name:mk25e52e47b384e7eeae83275e6a385fb152458a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:08:49.384574  475692 start.go:364] duration metric: took 123.238µs to acquireMachinesLock for "newest-cni-718789"
	I1017 20:08:49.384606  475692 start.go:93] Provisioning new machine with config: &{Name:newest-cni-718789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718789 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:08:49.384683  475692 start.go:125] createHost starting for "" (driver="docker")
	W1017 20:08:46.562425  471476 node_ready.go:57] node "default-k8s-diff-port-740780" has "Ready":"False" status (will retry)
	W1017 20:08:49.061538  471476 node_ready.go:57] node "default-k8s-diff-port-740780" has "Ready":"False" status (will retry)
	W1017 20:08:51.062015  471476 node_ready.go:57] node "default-k8s-diff-port-740780" has "Ready":"False" status (will retry)
	I1017 20:08:49.388026  475692 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 20:08:49.388255  475692 start.go:159] libmachine.API.Create for "newest-cni-718789" (driver="docker")
	I1017 20:08:49.388306  475692 client.go:168] LocalClient.Create starting
	I1017 20:08:49.388377  475692 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem
	I1017 20:08:49.388425  475692 main.go:141] libmachine: Decoding PEM data...
	I1017 20:08:49.388438  475692 main.go:141] libmachine: Parsing certificate...
	I1017 20:08:49.388498  475692 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem
	I1017 20:08:49.388556  475692 main.go:141] libmachine: Decoding PEM data...
	I1017 20:08:49.388572  475692 main.go:141] libmachine: Parsing certificate...
	I1017 20:08:49.388944  475692 cli_runner.go:164] Run: docker network inspect newest-cni-718789 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 20:08:49.409803  475692 cli_runner.go:211] docker network inspect newest-cni-718789 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 20:08:49.409882  475692 network_create.go:284] running [docker network inspect newest-cni-718789] to gather additional debugging logs...
	I1017 20:08:49.409913  475692 cli_runner.go:164] Run: docker network inspect newest-cni-718789
	W1017 20:08:49.425872  475692 cli_runner.go:211] docker network inspect newest-cni-718789 returned with exit code 1
	I1017 20:08:49.425907  475692 network_create.go:287] error running [docker network inspect newest-cni-718789]: docker network inspect newest-cni-718789: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-718789 not found
	I1017 20:08:49.425937  475692 network_create.go:289] output of [docker network inspect newest-cni-718789]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-718789 not found
	
	** /stderr **
	I1017 20:08:49.426037  475692 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:08:49.443774  475692 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9f667d9c3ea2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:fc:1d:c6:d2:da} reservation:<nil>}
	I1017 20:08:49.444065  475692 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-82a22734829b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:22:5a:78:c5:e0:0a} reservation:<nil>}
	I1017 20:08:49.444638  475692 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0b88bd3b523f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:75:74:cd:15:9b} reservation:<nil>}
	I1017 20:08:49.444980  475692 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b07c93b74ead IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ae:cc:0a:13:a9:64} reservation:<nil>}
	I1017 20:08:49.445485  475692 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a05810}
	I1017 20:08:49.445514  475692 network_create.go:124] attempt to create docker network newest-cni-718789 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1017 20:08:49.445593  475692 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-718789 newest-cni-718789
	I1017 20:08:49.505263  475692 network_create.go:108] docker network newest-cni-718789 192.168.85.0/24 created
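The subnet selection above skips each 192.168.x.0/24 range already claimed by an existing bridge and settles on 192.168.85.0/24. A rough shell equivalent is sketched below; the scan loop and candidate list are assumptions for illustration (they mirror the 9-wide stepping seen in the log), while the docker network create flags are copied from the logged command:

    # collect subnets already used by existing docker networks
    used=$(docker network inspect $(docker network ls -q) \
            --format '{{range .IPAM.Config}}{{.Subnet}}{{"\n"}}{{end}}' 2>/dev/null)
    for third in 49 58 67 76 85 94; do
      subnet="192.168.${third}.0/24"
      # skip subnets that are already taken, mirroring the "skipping subnet" lines above
      if echo "$used" | grep -qF "$subnet"; then continue; fi
      docker network create --driver=bridge --subnet="$subnet" --gateway="192.168.${third}.1" \
        -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
        --label=created_by.minikube.sigs.k8s.io=true \
        --label=name.minikube.sigs.k8s.io=newest-cni-718789 newest-cni-718789
      break
    done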
	I1017 20:08:49.505305  475692 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-718789" container
	I1017 20:08:49.505378  475692 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 20:08:49.525749  475692 cli_runner.go:164] Run: docker volume create newest-cni-718789 --label name.minikube.sigs.k8s.io=newest-cni-718789 --label created_by.minikube.sigs.k8s.io=true
	I1017 20:08:49.543456  475692 oci.go:103] Successfully created a docker volume newest-cni-718789
	I1017 20:08:49.543547  475692 cli_runner.go:164] Run: docker run --rm --name newest-cni-718789-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-718789 --entrypoint /usr/bin/test -v newest-cni-718789:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 20:08:50.128457  475692 oci.go:107] Successfully prepared a docker volume newest-cni-718789
	I1017 20:08:50.128497  475692 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:08:50.128706  475692 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 20:08:50.128805  475692 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-718789:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1017 20:08:53.561387  471476 node_ready.go:57] node "default-k8s-diff-port-740780" has "Ready":"False" status (will retry)
	W1017 20:08:56.061410  471476 node_ready.go:57] node "default-k8s-diff-port-740780" has "Ready":"False" status (will retry)
	I1017 20:08:54.546187  475692 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-718789:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.41734147s)
	I1017 20:08:54.546217  475692 kic.go:203] duration metric: took 4.417696538s to extract preloaded images to volume ...
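The two commands behind the extraction step above, with the long host paths pulled out into variables (both values appear verbatim in the log lines):

    KIC_IMAGE="gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6"
    PRELOAD="/home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
    # named volume that will become /var inside the node container
    docker volume create newest-cni-718789 \
      --label name.minikube.sigs.k8s.io=newest-cni-718789 \
      --label created_by.minikube.sigs.k8s.io=true
    # throwaway container unpacks the lz4 preload tarball into that volume
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD":/preloaded.tar:ro -v newest-cni-718789:/extractDir \
      "$KIC_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir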
	W1017 20:08:54.546353  475692 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1017 20:08:54.546469  475692 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 20:08:54.609735  475692 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-718789 --name newest-cni-718789 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-718789 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-718789 --network newest-cni-718789 --ip 192.168.85.2 --volume newest-cni-718789:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 20:08:54.943489  475692 cli_runner.go:164] Run: docker container inspect newest-cni-718789 --format={{.State.Running}}
	I1017 20:08:54.971354  475692 cli_runner.go:164] Run: docker container inspect newest-cni-718789 --format={{.State.Status}}
	I1017 20:08:54.992442  475692 cli_runner.go:164] Run: docker exec newest-cni-718789 stat /var/lib/dpkg/alternatives/iptables
	I1017 20:08:55.058520  475692 oci.go:144] the created container "newest-cni-718789" has a running status.
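The two container inspect calls above can be condensed into a single state check, using the same container name:

    docker container inspect newest-cni-718789 \
      --format 'status={{.State.Status}} running={{.State.Running}}'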
	I1017 20:08:55.058555  475692 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/newest-cni-718789/id_rsa...
	I1017 20:08:55.374608  475692 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21753-257739/.minikube/machines/newest-cni-718789/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 20:08:55.401468  475692 cli_runner.go:164] Run: docker container inspect newest-cni-718789 --format={{.State.Status}}
	I1017 20:08:55.424118  475692 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 20:08:55.424141  475692 kic_runner.go:114] Args: [docker exec --privileged newest-cni-718789 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 20:08:55.484066  475692 cli_runner.go:164] Run: docker container inspect newest-cni-718789 --format={{.State.Status}}
	I1017 20:08:55.505869  475692 machine.go:93] provisionDockerMachine start ...
	I1017 20:08:55.506099  475692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:08:55.540054  475692 main.go:141] libmachine: Using SSH client type: native
	I1017 20:08:55.540401  475692 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1017 20:08:55.540418  475692 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:08:55.541116  475692 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 20:08:58.688115  475692 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-718789
	
	I1017 20:08:58.688141  475692 ubuntu.go:182] provisioning hostname "newest-cni-718789"
	I1017 20:08:58.688225  475692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:08:58.705505  475692 main.go:141] libmachine: Using SSH client type: native
	I1017 20:08:58.705848  475692 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1017 20:08:58.705868  475692 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-718789 && echo "newest-cni-718789" | sudo tee /etc/hostname
	I1017 20:08:58.865261  475692 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-718789
	
	I1017 20:08:58.865382  475692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:08:58.882744  475692 main.go:141] libmachine: Using SSH client type: native
	I1017 20:08:58.883060  475692 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1017 20:08:58.883084  475692 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-718789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-718789/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-718789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:08:59.032559  475692 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:08:59.032587  475692 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 20:08:59.032617  475692 ubuntu.go:190] setting up certificates
	I1017 20:08:59.032625  475692 provision.go:84] configureAuth start
	I1017 20:08:59.032683  475692 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718789
	I1017 20:08:59.050999  475692 provision.go:143] copyHostCerts
	I1017 20:08:59.051071  475692 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 20:08:59.051087  475692 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 20:08:59.051169  475692 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 20:08:59.051274  475692 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 20:08:59.051285  475692 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 20:08:59.051313  475692 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 20:08:59.051380  475692 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 20:08:59.051389  475692 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 20:08:59.051420  475692 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 20:08:59.051481  475692 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.newest-cni-718789 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-718789]
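minikube generates this server certificate in its own Go code; purely as an illustration of what the logged parameters (org and SAN list) amount to, an openssl equivalent would look roughly like the following, assuming ca.pem and ca-key.pem from the certs directory above are available in the working directory:

    # illustrative openssl sketch, not the mechanism minikube itself uses
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.newest-cni-718789"
    printf "subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:newest-cni-718789" > san.cnf
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 -extfile san.cnf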
	W1017 20:08:58.062260  471476 node_ready.go:57] node "default-k8s-diff-port-740780" has "Ready":"False" status (will retry)
	W1017 20:09:00.063980  471476 node_ready.go:57] node "default-k8s-diff-port-740780" has "Ready":"False" status (will retry)
	I1017 20:08:59.190256  475692 provision.go:177] copyRemoteCerts
	I1017 20:08:59.190329  475692 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:08:59.190373  475692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:08:59.207171  475692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/newest-cni-718789/id_rsa Username:docker}
	I1017 20:08:59.311945  475692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:08:59.330360  475692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 20:08:59.348598  475692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:08:59.367002  475692 provision.go:87] duration metric: took 334.362525ms to configureAuth
	I1017 20:08:59.367026  475692 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:08:59.367254  475692 config.go:182] Loaded profile config "newest-cni-718789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:08:59.367361  475692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:08:59.383790  475692 main.go:141] libmachine: Using SSH client type: native
	I1017 20:08:59.384097  475692 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1017 20:08:59.384111  475692 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:08:59.659515  475692 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:08:59.659543  475692 machine.go:96] duration metric: took 4.153540403s to provisionDockerMachine
	I1017 20:08:59.659553  475692 client.go:171] duration metric: took 10.271235325s to LocalClient.Create
	I1017 20:08:59.659568  475692 start.go:167] duration metric: took 10.271314109s to libmachine.API.Create "newest-cni-718789"
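The CRI-O option written a few lines above can be double-checked from the host, since the node is an ordinary (privileged, systemd-run) Docker container:

    docker exec newest-cni-718789 cat /etc/sysconfig/crio.minikube
    docker exec newest-cni-718789 systemctl is-active crio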
	I1017 20:08:59.659575  475692 start.go:293] postStartSetup for "newest-cni-718789" (driver="docker")
	I1017 20:08:59.659585  475692 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:08:59.659660  475692 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:08:59.659711  475692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:08:59.677233  475692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/newest-cni-718789/id_rsa Username:docker}
	I1017 20:08:59.780160  475692 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:08:59.783142  475692 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:08:59.783168  475692 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:08:59.783179  475692 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 20:08:59.783232  475692 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 20:08:59.783309  475692 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 20:08:59.783408  475692 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:08:59.790814  475692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:08:59.808057  475692 start.go:296] duration metric: took 148.451874ms for postStartSetup
	I1017 20:08:59.808465  475692 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718789
	I1017 20:08:59.825123  475692 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/config.json ...
	I1017 20:08:59.825427  475692 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:08:59.825475  475692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:08:59.841625  475692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/newest-cni-718789/id_rsa Username:docker}
	I1017 20:08:59.941660  475692 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:08:59.946730  475692 start.go:128] duration metric: took 10.562032876s to createHost
	I1017 20:08:59.946811  475692 start.go:83] releasing machines lock for "newest-cni-718789", held for 10.562223221s
	I1017 20:08:59.946915  475692 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718789
	I1017 20:08:59.964782  475692 ssh_runner.go:195] Run: cat /version.json
	I1017 20:08:59.964823  475692 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:08:59.964849  475692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:08:59.964875  475692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:08:59.987036  475692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/newest-cni-718789/id_rsa Username:docker}
	I1017 20:09:00.007292  475692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/newest-cni-718789/id_rsa Username:docker}
	I1017 20:09:00.141106  475692 ssh_runner.go:195] Run: systemctl --version
	I1017 20:09:00.330671  475692 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:09:00.387554  475692 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:09:00.394509  475692 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:09:00.394681  475692 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:09:00.434527  475692 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1017 20:09:00.434606  475692 start.go:495] detecting cgroup driver to use...
	I1017 20:09:00.434694  475692 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:09:00.434822  475692 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:09:00.455983  475692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:09:00.472510  475692 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:09:00.472686  475692 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:09:00.496012  475692 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:09:00.517824  475692 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:09:00.635427  475692 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:09:00.752386  475692 docker.go:234] disabling docker service ...
	I1017 20:09:00.752450  475692 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:09:00.775298  475692 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:09:00.788953  475692 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:09:00.906800  475692 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:09:01.031279  475692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
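After the stop/disable/mask sequence above, docker and cri-docker should be out of the picture inside the node; a quick sanity check (a non-zero exit from is-enabled is expected for masked or disabled units):

    docker exec newest-cni-718789 systemctl is-enabled \
      docker.service docker.socket cri-docker.service cri-docker.socket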
	I1017 20:09:01.044926  475692 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:09:01.059559  475692 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:09:01.059668  475692 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:01.070043  475692 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:09:01.070146  475692 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:01.079409  475692 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:01.088955  475692 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:01.099714  475692 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:09:01.110283  475692 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:01.119344  475692 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:01.136082  475692 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:01.146193  475692 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:09:01.154392  475692 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:09:01.162465  475692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:09:01.305505  475692 ssh_runner.go:195] Run: sudo systemctl restart crio
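Pieced together from the sed edits above (a reconstruction from the commands, not output captured from the node), the relevant keys in /etc/crio/crio.conf.d/02-crio.conf end up approximately as noted in the comments of this check:

    docker exec newest-cni-718789 grep -E \
      'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",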
	I1017 20:09:01.451724  475692 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:09:01.451808  475692 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:09:01.456263  475692 start.go:563] Will wait 60s for crictl version
	I1017 20:09:01.456329  475692 ssh_runner.go:195] Run: which crictl
	I1017 20:09:01.460750  475692 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:09:01.492145  475692 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:09:01.492239  475692 ssh_runner.go:195] Run: crio --version
	I1017 20:09:01.526553  475692 ssh_runner.go:195] Run: crio --version
	I1017 20:09:01.568629  475692 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:09:01.571561  475692 cli_runner.go:164] Run: docker network inspect newest-cni-718789 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:09:01.589487  475692 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 20:09:01.593644  475692 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:09:01.607212  475692 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1017 20:09:01.609976  475692 kubeadm.go:883] updating cluster {Name:newest-cni-718789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:09:01.610136  475692 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:09:01.610227  475692 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:09:01.644814  475692 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:09:01.644843  475692 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:09:01.644898  475692 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:09:01.670918  475692 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:09:01.670942  475692 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:09:01.670950  475692 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1017 20:09:01.671031  475692 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-718789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
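The kubelet drop-in above is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp step near the end of this log; once in place, the effective unit can be reviewed with:

    docker exec newest-cni-718789 systemctl cat kubelet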
	I1017 20:09:01.671111  475692 ssh_runner.go:195] Run: crio config
	I1017 20:09:01.748656  475692 cni.go:84] Creating CNI manager for ""
	I1017 20:09:01.748682  475692 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:09:01.748709  475692 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1017 20:09:01.748737  475692 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-718789 NodeName:newest-cni-718789 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:09:01.748882  475692 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-718789"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
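
Editor's note: the generated kubeadm config above stitches four documents together (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Per the config's own comment, the KubeletConfiguration disables disk resource management by default: imageGCHighThresholdPercent is 100 and every evictionHard threshold is "0%", so the kubelet never evicts for disk pressure on these CI nodes. A minimal Go sketch, assuming gopkg.in/yaml.v3 is available (the struct and field selection here are illustrative, not minikube code), that reads those fields back out of such a fragment:

    package main

    import (
    	"fmt"

    	"gopkg.in/yaml.v3"
    )

    // kubeletFragment mirrors only the KubeletConfiguration fields above that
    // control disk-pressure behaviour (names follow the YAML keys in the dump).
    type kubeletFragment struct {
    	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
    	EvictionHard                map[string]string `yaml:"evictionHard"`
    	FailSwapOn                  bool              `yaml:"failSwapOn"`
    }

    const doc = `
    imageGCHighThresholdPercent: 100
    evictionHard:
      nodefs.available: "0%"
      nodefs.inodesFree: "0%"
      imagefs.available: "0%"
    failSwapOn: false
    `

    func main() {
    	var cfg kubeletFragment
    	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
    		panic(err)
    	}
    	// With all thresholds at 0% the kubelet never evicts pods for disk
    	// pressure, and image GC never triggers below 100% usage.
    	fmt.Println(cfg.ImageGCHighThresholdPercent, cfg.EvictionHard, cfg.FailSwapOn)
    }
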
	
	I1017 20:09:01.748964  475692 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:09:01.757313  475692 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:09:01.757443  475692 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:09:01.765036  475692 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1017 20:09:01.778642  475692 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:09:01.791595  475692 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1017 20:09:01.804784  475692 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:09:01.808647  475692 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
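
Editor's note: the bash one-liner above makes the control-plane.minikube.internal record idempotent - strip any existing line for that name, then append a fresh "ip<tab>host" entry. A rough Go sketch of the same idea (ensureHostsEntry is a hypothetical helper, not minikube's implementation, which runs the bash shown above over SSH):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry rewrites hostsPath so exactly one line maps host to ip:
    // drop any line ending in "\t<host>", then append a new "ip\thost" record.
    func ensureHostsEntry(hostsPath, ip, host string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // stale record for this host, drop it
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
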
	I1017 20:09:01.820867  475692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:09:01.939482  475692 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:09:01.955298  475692 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789 for IP: 192.168.85.2
	I1017 20:09:01.955373  475692 certs.go:195] generating shared ca certs ...
	I1017 20:09:01.955404  475692 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:09:01.955578  475692 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 20:09:01.955643  475692 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 20:09:01.955665  475692 certs.go:257] generating profile certs ...
	I1017 20:09:01.955758  475692 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/client.key
	I1017 20:09:01.955802  475692 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/client.crt with IP's: []
	I1017 20:09:02.424491  475692 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/client.crt ...
	I1017 20:09:02.424532  475692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/client.crt: {Name:mkc69128f41698764a737e4e559c341ee3cc8a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:09:02.424740  475692 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/client.key ...
	I1017 20:09:02.424755  475692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/client.key: {Name:mk8bf8c054854d786b48eefb2222a0387ee89d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:09:02.424858  475692 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/apiserver.key.2d8ce425
	I1017 20:09:02.424876  475692 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/apiserver.crt.2d8ce425 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1017 20:09:03.163171  475692 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/apiserver.crt.2d8ce425 ...
	I1017 20:09:03.163202  475692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/apiserver.crt.2d8ce425: {Name:mk626de04464d27d6ff9639b24f542b70674a2b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:09:03.163390  475692 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/apiserver.key.2d8ce425 ...
	I1017 20:09:03.163404  475692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/apiserver.key.2d8ce425: {Name:mkbd8628cf555c541f292692d104e7b63b2a18f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:09:03.163487  475692 certs.go:382] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/apiserver.crt.2d8ce425 -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/apiserver.crt
	I1017 20:09:03.163573  475692 certs.go:386] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/apiserver.key.2d8ce425 -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/apiserver.key
	I1017 20:09:03.163638  475692 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/proxy-client.key
	I1017 20:09:03.163657  475692 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/proxy-client.crt with IP's: []
	I1017 20:09:04.100587  475692 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/proxy-client.crt ...
	I1017 20:09:04.100618  475692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/proxy-client.crt: {Name:mkcfc347252c893a0a7a7a23037ce7998d9299ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:09:04.100808  475692 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/proxy-client.key ...
	I1017 20:09:04.100823  475692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/proxy-client.key: {Name:mk94b40410aa13652e65d9195e0683ec785e80be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:09:04.101042  475692 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 20:09:04.101091  475692 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 20:09:04.101106  475692 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:09:04.101132  475692 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:09:04.101161  475692 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:09:04.101189  475692 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 20:09:04.101262  475692 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:09:04.101887  475692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:09:04.120540  475692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:09:04.140050  475692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:09:04.158054  475692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 20:09:04.175538  475692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 20:09:04.194752  475692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:09:04.215384  475692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:09:04.233021  475692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:09:04.259867  475692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 20:09:04.278861  475692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 20:09:04.301799  475692 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:09:04.319300  475692 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:09:04.332443  475692 ssh_runner.go:195] Run: openssl version
	I1017 20:09:04.339393  475692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 20:09:04.347810  475692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 20:09:04.354672  475692 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 20:09:04.354778  475692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 20:09:04.396085  475692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 20:09:04.404744  475692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 20:09:04.413116  475692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 20:09:04.417013  475692 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 20:09:04.417121  475692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 20:09:04.457964  475692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:09:04.466372  475692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:09:04.474435  475692 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:09:04.478204  475692 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:09:04.478287  475692 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:09:04.519061  475692 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
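
Editor's note: each CA certificate is installed the same way in the commands above - copy it under /usr/share/ca-certificates, ask openssl for its subject hash, then symlink /etc/ssl/certs/<hash>.0 at it so OpenSSL-based clients can resolve it (e.g. minikubeCA.pem -> b5213941.0). A small Go sketch of that sequence; installCACert is a hypothetical helper, and minikube actually runs these steps over SSH as shown:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert links certPath into /etc/ssl/certs under its OpenSSL
    // subject-hash name (<hash>.0), mirroring the ssh_runner commands above.
    func installCACert(certPath string) error {
    	// "openssl x509 -hash -noout -in <cert>" prints the subject hash, e.g. "b5213941".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // behave like "ln -fs": replace any existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
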
	I1017 20:09:04.527421  475692 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:09:04.531042  475692 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 20:09:04.531113  475692 kubeadm.go:400] StartCluster: {Name:newest-cni-718789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:09:04.531202  475692 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:09:04.531296  475692 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:09:04.556895  475692 cri.go:89] found id: ""
	I1017 20:09:04.557014  475692 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:09:04.566642  475692 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:09:04.574534  475692 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 20:09:04.574625  475692 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:09:04.582407  475692 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:09:04.582429  475692 kubeadm.go:157] found existing configuration files:
	
	I1017 20:09:04.582503  475692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 20:09:04.590173  475692 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:09:04.590249  475692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:09:04.598181  475692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 20:09:04.605554  475692 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:09:04.605645  475692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:09:04.612811  475692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 20:09:04.620662  475692 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:09:04.620728  475692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:09:04.627818  475692 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 20:09:04.635346  475692 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:09:04.635459  475692 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 20:09:04.643178  475692 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 20:09:04.682679  475692 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 20:09:04.682742  475692 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 20:09:04.711178  475692 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 20:09:04.711255  475692 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1017 20:09:04.711298  475692 kubeadm.go:318] OS: Linux
	I1017 20:09:04.711348  475692 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 20:09:04.711404  475692 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1017 20:09:04.711465  475692 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 20:09:04.711519  475692 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 20:09:04.711573  475692 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 20:09:04.711635  475692 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 20:09:04.711710  475692 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 20:09:04.711809  475692 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 20:09:04.711922  475692 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1017 20:09:04.779088  475692 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 20:09:04.779273  475692 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 20:09:04.779407  475692 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 20:09:04.786646  475692 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1017 20:09:02.562637  471476 node_ready.go:57] node "default-k8s-diff-port-740780" has "Ready":"False" status (will retry)
	W1017 20:09:05.061834  471476 node_ready.go:57] node "default-k8s-diff-port-740780" has "Ready":"False" status (will retry)
	I1017 20:09:04.792441  475692 out.go:252]   - Generating certificates and keys ...
	I1017 20:09:04.792639  475692 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 20:09:04.792750  475692 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 20:09:05.382389  475692 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 20:09:05.804083  475692 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 20:09:06.948178  475692 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 20:09:07.068306  475692 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 20:09:07.911513  475692 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 20:09:07.911873  475692 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-718789] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 20:09:08.009986  475692 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 20:09:08.010289  475692 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-718789] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 20:09:08.814943  475692 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	W1017 20:09:07.062296  471476 node_ready.go:57] node "default-k8s-diff-port-740780" has "Ready":"False" status (will retry)
	W1017 20:09:09.561675  471476 node_ready.go:57] node "default-k8s-diff-port-740780" has "Ready":"False" status (will retry)
	I1017 20:09:09.182677  475692 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 20:09:09.390134  475692 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 20:09:09.390400  475692 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 20:09:09.593487  475692 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 20:09:09.765784  475692 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 20:09:10.381873  475692 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 20:09:11.221037  475692 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 20:09:11.447683  475692 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 20:09:11.447784  475692 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 20:09:11.449574  475692 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 20:09:11.452940  475692 out.go:252]   - Booting up control plane ...
	I1017 20:09:11.453046  475692 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 20:09:11.453123  475692 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 20:09:11.453189  475692 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 20:09:11.469826  475692 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 20:09:11.470177  475692 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 20:09:11.480766  475692 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 20:09:11.480883  475692 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 20:09:11.480971  475692 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 20:09:11.620671  475692 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 20:09:11.620795  475692 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 20:09:12.621601  475692 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002537452s
	I1017 20:09:12.625655  475692 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 20:09:12.625758  475692 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1017 20:09:12.625854  475692 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 20:09:12.625937  475692 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1017 20:09:11.563141  471476 node_ready.go:57] node "default-k8s-diff-port-740780" has "Ready":"False" status (will retry)
	W1017 20:09:14.062089  471476 node_ready.go:57] node "default-k8s-diff-port-740780" has "Ready":"False" status (will retry)
	I1017 20:09:16.117133  475692 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.490438111s
	I1017 20:09:17.330524  475692 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.704779505s
	I1017 20:09:19.127061  475692 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.501340657s
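
Editor's note: the [control-plane-check] phase above considers each component healthy once its endpoint returns HTTP 200 (kube-apiserver /livez, controller-manager /healthz, scheduler /livez). A small Go sketch of that kind of poll loop; waitHealthy is a made-up name, and certificate verification is skipped only because the sketch assumes the components' self-signed serving certs:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls url until it returns HTTP 200 or the timeout elapses,
    // roughly what the [control-plane-check] lines above report per component.
    func waitHealthy(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Sketch only: skip verification of the self-signed serving certs.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
    	for _, u := range []string{
    		"https://192.168.85.2:8443/livez", // kube-apiserver
    		"https://127.0.0.1:10257/healthz", // kube-controller-manager
    		"https://127.0.0.1:10259/livez",   // kube-scheduler
    	} {
    		if err := waitHealthy(u, 4*time.Minute); err != nil {
    			fmt.Println(err)
    		}
    	}
    }
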
	I1017 20:09:19.147136  475692 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 20:09:19.162230  475692 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 20:09:19.176558  475692 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 20:09:19.176771  475692 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-718789 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 20:09:19.189029  475692 kubeadm.go:318] [bootstrap-token] Using token: fcau3n.abmtwlvw7u0jmbjw
	I1017 20:09:19.191992  475692 out.go:252]   - Configuring RBAC rules ...
	I1017 20:09:19.192117  475692 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 20:09:19.198929  475692 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 20:09:19.215631  475692 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 20:09:19.223664  475692 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 20:09:19.230425  475692 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 20:09:19.238342  475692 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 20:09:19.536615  475692 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 20:09:19.972762  475692 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 20:09:20.534684  475692 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 20:09:20.535790  475692 kubeadm.go:318] 
	I1017 20:09:20.535868  475692 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 20:09:20.535879  475692 kubeadm.go:318] 
	I1017 20:09:20.535960  475692 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 20:09:20.535969  475692 kubeadm.go:318] 
	I1017 20:09:20.535996  475692 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 20:09:20.536060  475692 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 20:09:20.536116  475692 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 20:09:20.536125  475692 kubeadm.go:318] 
	I1017 20:09:20.536182  475692 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 20:09:20.536196  475692 kubeadm.go:318] 
	I1017 20:09:20.536246  475692 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 20:09:20.536255  475692 kubeadm.go:318] 
	I1017 20:09:20.536309  475692 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 20:09:20.536391  475692 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 20:09:20.536476  475692 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 20:09:20.536487  475692 kubeadm.go:318] 
	I1017 20:09:20.536607  475692 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 20:09:20.536694  475692 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 20:09:20.536704  475692 kubeadm.go:318] 
	I1017 20:09:20.536791  475692 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token fcau3n.abmtwlvw7u0jmbjw \
	I1017 20:09:20.536903  475692 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 \
	I1017 20:09:20.536928  475692 kubeadm.go:318] 	--control-plane 
	I1017 20:09:20.536936  475692 kubeadm.go:318] 
	I1017 20:09:20.537024  475692 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 20:09:20.537032  475692 kubeadm.go:318] 
	I1017 20:09:20.537401  475692 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token fcau3n.abmtwlvw7u0jmbjw \
	I1017 20:09:20.537512  475692 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 
	I1017 20:09:20.542231  475692 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1017 20:09:20.542470  475692 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1017 20:09:20.542585  475692 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 20:09:20.542605  475692 cni.go:84] Creating CNI manager for ""
	I1017 20:09:20.542613  475692 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:09:20.545873  475692 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1017 20:09:16.562560  471476 node_ready.go:57] node "default-k8s-diff-port-740780" has "Ready":"False" status (will retry)
	W1017 20:09:19.061505  471476 node_ready.go:57] node "default-k8s-diff-port-740780" has "Ready":"False" status (will retry)
	W1017 20:09:21.063702  471476 node_ready.go:57] node "default-k8s-diff-port-740780" has "Ready":"False" status (will retry)
	I1017 20:09:20.548726  475692 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 20:09:20.553721  475692 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 20:09:20.553742  475692 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 20:09:20.569120  475692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 20:09:20.875679  475692 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 20:09:20.875822  475692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:20.875901  475692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-718789 minikube.k8s.io/updated_at=2025_10_17T20_09_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d minikube.k8s.io/name=newest-cni-718789 minikube.k8s.io/primary=true
	I1017 20:09:21.068591  475692 ops.go:34] apiserver oom_adj: -16
	I1017 20:09:21.068699  475692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:21.569463  475692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:22.069394  475692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:22.569505  475692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:23.069692  475692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:23.568822  475692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:24.068823  475692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:24.569422  475692 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:09:24.670195  475692 kubeadm.go:1113] duration metric: took 3.794411063s to wait for elevateKubeSystemPrivileges
	I1017 20:09:24.670223  475692 kubeadm.go:402] duration metric: took 20.139130873s to StartCluster
	I1017 20:09:24.670240  475692 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:09:24.670301  475692 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:09:24.671229  475692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:09:24.671460  475692 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:09:24.671536  475692 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 20:09:24.671801  475692 config.go:182] Loaded profile config "newest-cni-718789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:09:24.671838  475692 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:09:24.671894  475692 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-718789"
	I1017 20:09:24.671908  475692 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-718789"
	I1017 20:09:24.671929  475692 host.go:66] Checking if "newest-cni-718789" exists ...
	I1017 20:09:24.672703  475692 cli_runner.go:164] Run: docker container inspect newest-cni-718789 --format={{.State.Status}}
	I1017 20:09:24.673268  475692 addons.go:69] Setting default-storageclass=true in profile "newest-cni-718789"
	I1017 20:09:24.673299  475692 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-718789"
	I1017 20:09:24.673608  475692 cli_runner.go:164] Run: docker container inspect newest-cni-718789 --format={{.State.Status}}
	I1017 20:09:24.674707  475692 out.go:179] * Verifying Kubernetes components...
	I1017 20:09:24.678346  475692 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:09:24.720405  475692 addons.go:238] Setting addon default-storageclass=true in "newest-cni-718789"
	I1017 20:09:24.720454  475692 host.go:66] Checking if "newest-cni-718789" exists ...
	I1017 20:09:24.720983  475692 cli_runner.go:164] Run: docker container inspect newest-cni-718789 --format={{.State.Status}}
	I1017 20:09:24.721395  475692 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:09:22.061692  471476 node_ready.go:49] node "default-k8s-diff-port-740780" is "Ready"
	I1017 20:09:22.061718  471476 node_ready.go:38] duration metric: took 40.003226174s for node "default-k8s-diff-port-740780" to be "Ready" ...
	I1017 20:09:22.061731  471476 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:09:22.061790  471476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:09:22.121169  471476 api_server.go:72] duration metric: took 41.289057094s to wait for apiserver process to appear ...
	I1017 20:09:22.121190  471476 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:09:22.121209  471476 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1017 20:09:22.135343  471476 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1017 20:09:22.137288  471476 api_server.go:141] control plane version: v1.34.1
	I1017 20:09:22.137312  471476 api_server.go:131] duration metric: took 16.1162ms to wait for apiserver health ...
	I1017 20:09:22.137321  471476 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:09:22.143984  471476 system_pods.go:59] 8 kube-system pods found
	I1017 20:09:22.144016  471476 system_pods.go:61] "coredns-66bc5c9577-6mknt" [15647d52-61fb-4af6-8d28-66da6ebd0923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:09:22.144023  471476 system_pods.go:61] "etcd-default-k8s-diff-port-740780" [6a636316-c994-44d8-b608-0c1cfa06bd55] Running
	I1017 20:09:22.144029  471476 system_pods.go:61] "kindnet-fnx26" [16e1d707-7d88-4317-ab9f-dd7698ee1cd1] Running
	I1017 20:09:22.144033  471476 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-740780" [7e36f4e9-953c-457d-b6bf-b26ac987ab87] Running
	I1017 20:09:22.144038  471476 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-740780" [9e5bfd14-bb31-4668-a9db-6278ca49ae54] Running
	I1017 20:09:22.144043  471476 system_pods.go:61] "kube-proxy-8x772" [19f55ff7-64eb-4407-9168-aa18ddbe543c] Running
	I1017 20:09:22.144047  471476 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-740780" [44223246-1f61-4365-98a5-c3820458e28a] Running
	I1017 20:09:22.144054  471476 system_pods.go:61] "storage-provisioner" [f0266236-3025-407f-ae0f-c4e9e5ae8ff0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:09:22.144059  471476 system_pods.go:74] duration metric: took 6.733405ms to wait for pod list to return data ...
	I1017 20:09:22.144067  471476 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:09:22.147370  471476 default_sa.go:45] found service account: "default"
	I1017 20:09:22.147438  471476 default_sa.go:55] duration metric: took 3.36452ms for default service account to be created ...
	I1017 20:09:22.147463  471476 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:09:22.151387  471476 system_pods.go:86] 8 kube-system pods found
	I1017 20:09:22.151499  471476 system_pods.go:89] "coredns-66bc5c9577-6mknt" [15647d52-61fb-4af6-8d28-66da6ebd0923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:09:22.151535  471476 system_pods.go:89] "etcd-default-k8s-diff-port-740780" [6a636316-c994-44d8-b608-0c1cfa06bd55] Running
	I1017 20:09:22.151562  471476 system_pods.go:89] "kindnet-fnx26" [16e1d707-7d88-4317-ab9f-dd7698ee1cd1] Running
	I1017 20:09:22.151584  471476 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-740780" [7e36f4e9-953c-457d-b6bf-b26ac987ab87] Running
	I1017 20:09:22.151619  471476 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-740780" [9e5bfd14-bb31-4668-a9db-6278ca49ae54] Running
	I1017 20:09:22.151644  471476 system_pods.go:89] "kube-proxy-8x772" [19f55ff7-64eb-4407-9168-aa18ddbe543c] Running
	I1017 20:09:22.151664  471476 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-740780" [44223246-1f61-4365-98a5-c3820458e28a] Running
	I1017 20:09:22.151702  471476 system_pods.go:89] "storage-provisioner" [f0266236-3025-407f-ae0f-c4e9e5ae8ff0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 20:09:22.151739  471476 retry.go:31] will retry after 261.41749ms: missing components: kube-dns
	I1017 20:09:22.420247  471476 system_pods.go:86] 8 kube-system pods found
	I1017 20:09:22.420334  471476 system_pods.go:89] "coredns-66bc5c9577-6mknt" [15647d52-61fb-4af6-8d28-66da6ebd0923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:09:22.420359  471476 system_pods.go:89] "etcd-default-k8s-diff-port-740780" [6a636316-c994-44d8-b608-0c1cfa06bd55] Running
	I1017 20:09:22.420397  471476 system_pods.go:89] "kindnet-fnx26" [16e1d707-7d88-4317-ab9f-dd7698ee1cd1] Running
	I1017 20:09:22.420421  471476 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-740780" [7e36f4e9-953c-457d-b6bf-b26ac987ab87] Running
	I1017 20:09:22.420442  471476 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-740780" [9e5bfd14-bb31-4668-a9db-6278ca49ae54] Running
	I1017 20:09:22.420480  471476 system_pods.go:89] "kube-proxy-8x772" [19f55ff7-64eb-4407-9168-aa18ddbe543c] Running
	I1017 20:09:22.420502  471476 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-740780" [44223246-1f61-4365-98a5-c3820458e28a] Running
	I1017 20:09:22.420570  471476 system_pods.go:89] "storage-provisioner" [f0266236-3025-407f-ae0f-c4e9e5ae8ff0] Running
	I1017 20:09:22.420605  471476 retry.go:31] will retry after 317.603423ms: missing components: kube-dns
	I1017 20:09:22.742097  471476 system_pods.go:86] 8 kube-system pods found
	I1017 20:09:22.742134  471476 system_pods.go:89] "coredns-66bc5c9577-6mknt" [15647d52-61fb-4af6-8d28-66da6ebd0923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:09:22.742141  471476 system_pods.go:89] "etcd-default-k8s-diff-port-740780" [6a636316-c994-44d8-b608-0c1cfa06bd55] Running
	I1017 20:09:22.742147  471476 system_pods.go:89] "kindnet-fnx26" [16e1d707-7d88-4317-ab9f-dd7698ee1cd1] Running
	I1017 20:09:22.742190  471476 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-740780" [7e36f4e9-953c-457d-b6bf-b26ac987ab87] Running
	I1017 20:09:22.742203  471476 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-740780" [9e5bfd14-bb31-4668-a9db-6278ca49ae54] Running
	I1017 20:09:22.742209  471476 system_pods.go:89] "kube-proxy-8x772" [19f55ff7-64eb-4407-9168-aa18ddbe543c] Running
	I1017 20:09:22.742213  471476 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-740780" [44223246-1f61-4365-98a5-c3820458e28a] Running
	I1017 20:09:22.742217  471476 system_pods.go:89] "storage-provisioner" [f0266236-3025-407f-ae0f-c4e9e5ae8ff0] Running
	I1017 20:09:22.742244  471476 retry.go:31] will retry after 458.19074ms: missing components: kube-dns
	I1017 20:09:23.203641  471476 system_pods.go:86] 8 kube-system pods found
	I1017 20:09:23.203675  471476 system_pods.go:89] "coredns-66bc5c9577-6mknt" [15647d52-61fb-4af6-8d28-66da6ebd0923] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:09:23.203682  471476 system_pods.go:89] "etcd-default-k8s-diff-port-740780" [6a636316-c994-44d8-b608-0c1cfa06bd55] Running
	I1017 20:09:23.203710  471476 system_pods.go:89] "kindnet-fnx26" [16e1d707-7d88-4317-ab9f-dd7698ee1cd1] Running
	I1017 20:09:23.203720  471476 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-740780" [7e36f4e9-953c-457d-b6bf-b26ac987ab87] Running
	I1017 20:09:23.203725  471476 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-740780" [9e5bfd14-bb31-4668-a9db-6278ca49ae54] Running
	I1017 20:09:23.203732  471476 system_pods.go:89] "kube-proxy-8x772" [19f55ff7-64eb-4407-9168-aa18ddbe543c] Running
	I1017 20:09:23.203737  471476 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-740780" [44223246-1f61-4365-98a5-c3820458e28a] Running
	I1017 20:09:23.203743  471476 system_pods.go:89] "storage-provisioner" [f0266236-3025-407f-ae0f-c4e9e5ae8ff0] Running
	I1017 20:09:23.203758  471476 retry.go:31] will retry after 550.184615ms: missing components: kube-dns
	I1017 20:09:23.758082  471476 system_pods.go:86] 8 kube-system pods found
	I1017 20:09:23.758117  471476 system_pods.go:89] "coredns-66bc5c9577-6mknt" [15647d52-61fb-4af6-8d28-66da6ebd0923] Running
	I1017 20:09:23.758124  471476 system_pods.go:89] "etcd-default-k8s-diff-port-740780" [6a636316-c994-44d8-b608-0c1cfa06bd55] Running
	I1017 20:09:23.758129  471476 system_pods.go:89] "kindnet-fnx26" [16e1d707-7d88-4317-ab9f-dd7698ee1cd1] Running
	I1017 20:09:23.758134  471476 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-740780" [7e36f4e9-953c-457d-b6bf-b26ac987ab87] Running
	I1017 20:09:23.758139  471476 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-740780" [9e5bfd14-bb31-4668-a9db-6278ca49ae54] Running
	I1017 20:09:23.758143  471476 system_pods.go:89] "kube-proxy-8x772" [19f55ff7-64eb-4407-9168-aa18ddbe543c] Running
	I1017 20:09:23.758147  471476 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-740780" [44223246-1f61-4365-98a5-c3820458e28a] Running
	I1017 20:09:23.758150  471476 system_pods.go:89] "storage-provisioner" [f0266236-3025-407f-ae0f-c4e9e5ae8ff0] Running
	I1017 20:09:23.758158  471476 system_pods.go:126] duration metric: took 1.610676189s to wait for k8s-apps to be running ...
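
Editor's note: the "will retry after ..." lines above are a poll-with-backoff loop - list the kube-system pods, and if kube-dns is not yet Running, sleep a jittered, growing delay and check again. A generic Go sketch of that pattern (pollUntil is illustrative only, not minikube's retry package):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // pollUntil retries check with a randomized, growing delay until it succeeds
    // or maxWait is exhausted, the same shape as the retry.go messages above.
    func pollUntil(maxWait time.Duration, check func() error) error {
    	start := time.Now()
    	delay := 250 * time.Millisecond
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > maxWait {
    			return fmt.Errorf("gave up after %s: %w", maxWait, err)
    		}
    		// add jitter and grow the delay, capped at a couple of seconds
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
    		if delay < 2*time.Second {
    			delay *= 2
    		}
    	}
    }

    func main() {
    	attempts := 0
    	_ = pollUntil(2*time.Minute, func() error {
    		attempts++
    		if attempts < 4 {
    			return fmt.Errorf("missing components: kube-dns")
    		}
    		return nil
    	})
    	fmt.Println("k8s-apps running after", attempts, "checks")
    }
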
	I1017 20:09:23.758176  471476 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:09:23.758232  471476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:09:23.771147  471476 system_svc.go:56] duration metric: took 12.960962ms WaitForService to wait for kubelet
	I1017 20:09:23.771176  471476 kubeadm.go:586] duration metric: took 42.939069449s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:09:23.771194  471476 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:09:23.774306  471476 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:09:23.774338  471476 node_conditions.go:123] node cpu capacity is 2
	I1017 20:09:23.774352  471476 node_conditions.go:105] duration metric: took 3.152309ms to run NodePressure ...
	I1017 20:09:23.774365  471476 start.go:241] waiting for startup goroutines ...
	I1017 20:09:23.774372  471476 start.go:246] waiting for cluster config update ...
	I1017 20:09:23.774389  471476 start.go:255] writing updated cluster config ...
	I1017 20:09:23.774672  471476 ssh_runner.go:195] Run: rm -f paused
	I1017 20:09:23.778358  471476 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:09:23.781910  471476 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6mknt" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:09:23.786748  471476 pod_ready.go:94] pod "coredns-66bc5c9577-6mknt" is "Ready"
	I1017 20:09:23.786776  471476 pod_ready.go:86] duration metric: took 4.805195ms for pod "coredns-66bc5c9577-6mknt" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:09:23.791153  471476 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:09:23.797302  471476 pod_ready.go:94] pod "etcd-default-k8s-diff-port-740780" is "Ready"
	I1017 20:09:23.797329  471476 pod_ready.go:86] duration metric: took 6.149511ms for pod "etcd-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:09:23.799567  471476 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:09:23.804054  471476 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-740780" is "Ready"
	I1017 20:09:23.804081  471476 pod_ready.go:86] duration metric: took 4.48778ms for pod "kube-apiserver-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:09:23.806316  471476 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:09:24.182052  471476 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-740780" is "Ready"
	I1017 20:09:24.182085  471476 pod_ready.go:86] duration metric: took 375.746162ms for pod "kube-controller-manager-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:09:24.382917  471476 pod_ready.go:83] waiting for pod "kube-proxy-8x772" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:09:24.783175  471476 pod_ready.go:94] pod "kube-proxy-8x772" is "Ready"
	I1017 20:09:24.783207  471476 pod_ready.go:86] duration metric: took 400.216826ms for pod "kube-proxy-8x772" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:09:24.983220  471476 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:09:25.382596  471476 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-740780" is "Ready"
	I1017 20:09:25.382627  471476 pod_ready.go:86] duration metric: took 399.379482ms for pod "kube-scheduler-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:09:25.382640  471476 pod_ready.go:40] duration metric: took 1.604251747s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:09:25.483522  471476 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 20:09:25.486932  471476 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-740780" cluster and "default" namespace by default
	I1017 20:09:24.724302  475692 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:09:24.724325  475692 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:09:24.724383  475692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:09:24.764790  475692 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:09:24.764823  475692 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:09:24.764891  475692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:09:24.770623  475692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/newest-cni-718789/id_rsa Username:docker}
	I1017 20:09:24.793920  475692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/newest-cni-718789/id_rsa Username:docker}
	I1017 20:09:25.006048  475692 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 20:09:25.006226  475692 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:09:25.102723  475692 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:09:25.122264  475692 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:09:25.827306  475692 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:09:25.827374  475692 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:09:25.827456  475692 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1017 20:09:26.182872  475692 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.080104475s)
	I1017 20:09:26.182951  475692 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.060662098s)
	I1017 20:09:26.183168  475692 api_server.go:72] duration metric: took 1.511685917s to wait for apiserver process to appear ...
	I1017 20:09:26.183177  475692 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:09:26.183204  475692 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1017 20:09:26.195881  475692 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1017 20:09:26.199748  475692 api_server.go:141] control plane version: v1.34.1
	I1017 20:09:26.199819  475692 api_server.go:131] duration metric: took 16.634766ms to wait for apiserver health ...
	I1017 20:09:26.199842  475692 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:09:26.205543  475692 system_pods.go:59] 8 kube-system pods found
	I1017 20:09:26.205696  475692 system_pods.go:61] "coredns-66bc5c9577-6pm4f" [6b397048-b97c-490f-9af0-a896e0f0e9eb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 20:09:26.205733  475692 system_pods.go:61] "etcd-newest-cni-718789" [a1dfd64a-5104-4a5f-b417-07e968b5227b] Running
	I1017 20:09:26.205761  475692 system_pods.go:61] "kindnet-lxdzb" [5f8a65f1-734c-4cc7-be69-7554cd4a7f07] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 20:09:26.205798  475692 system_pods.go:61] "kube-apiserver-newest-cni-718789" [aaa9a2d6-e322-4025-9c2c-3da21286ba0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:09:26.205836  475692 system_pods.go:61] "kube-controller-manager-newest-cni-718789" [804c53b6-55ab-459c-ab0b-4e8ec1dc8147] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:09:26.205862  475692 system_pods.go:61] "kube-proxy-s7gjc" [a08b3286-dc61-4ffc-8654-7be35ce377c6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 20:09:26.205886  475692 system_pods.go:61] "kube-scheduler-newest-cni-718789" [1103386a-5132-4c74-a47c-f31ad50a8447] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:09:26.205914  475692 system_pods.go:61] "storage-provisioner" [0da306ef-227b-4f5c-a44c-c7cab4716c98] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 20:09:26.205947  475692 system_pods.go:74] duration metric: took 6.085038ms to wait for pod list to return data ...
	I1017 20:09:26.205975  475692 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:09:26.209118  475692 default_sa.go:45] found service account: "default"
	I1017 20:09:26.209199  475692 default_sa.go:55] duration metric: took 3.195442ms for default service account to be created ...
	I1017 20:09:26.209225  475692 kubeadm.go:586] duration metric: took 1.537741367s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 20:09:26.209264  475692 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:09:26.209664  475692 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1017 20:09:26.212639  475692 addons.go:514] duration metric: took 1.540788866s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1017 20:09:26.212714  475692 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:09:26.212734  475692 node_conditions.go:123] node cpu capacity is 2
	I1017 20:09:26.212746  475692 node_conditions.go:105] duration metric: took 3.454766ms to run NodePressure ...
	I1017 20:09:26.212757  475692 start.go:241] waiting for startup goroutines ...
	I1017 20:09:26.332090  475692 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-718789" context rescaled to 1 replicas
	I1017 20:09:26.332135  475692 start.go:246] waiting for cluster config update ...
	I1017 20:09:26.332148  475692 start.go:255] writing updated cluster config ...
	I1017 20:09:26.332423  475692 ssh_runner.go:195] Run: rm -f paused
	I1017 20:09:26.396644  475692 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 20:09:26.399904  475692 out.go:179] * Done! kubectl is now configured to use "newest-cni-718789" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 17 20:09:25 newest-cni-718789 crio[840]: time="2025-10-17T20:09:25.945225351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:25 newest-cni-718789 crio[840]: time="2025-10-17T20:09:25.951871628Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=404348f4-c003-4c92-9ea9-7292109f8256 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:09:25 newest-cni-718789 crio[840]: time="2025-10-17T20:09:25.968370168Z" level=info msg="Ran pod sandbox 016f81ef7d892faaa6ace22d610276ba262a5e30a6bc180e8ab6edb285b8c494 with infra container: kube-system/kindnet-lxdzb/POD" id=404348f4-c003-4c92-9ea9-7292109f8256 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:09:25 newest-cni-718789 crio[840]: time="2025-10-17T20:09:25.970143864Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-s7gjc/POD" id=21531a62-dc00-409f-ab39-8bb95a3627a1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:09:25 newest-cni-718789 crio[840]: time="2025-10-17T20:09:25.9702251Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:25 newest-cni-718789 crio[840]: time="2025-10-17T20:09:25.981808483Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=21531a62-dc00-409f-ab39-8bb95a3627a1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:09:25 newest-cni-718789 crio[840]: time="2025-10-17T20:09:25.985526881Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8b6ff06b-1194-48ff-8103-370ded9bfa06 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:09:26 newest-cni-718789 crio[840]: time="2025-10-17T20:09:26.018865399Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5aca6d2b-9ae6-4504-a912-e00db0d45cf5 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:09:26 newest-cni-718789 crio[840]: time="2025-10-17T20:09:26.030760667Z" level=info msg="Ran pod sandbox 5bd70d44dfd53d7842ea593dd464c9d9bc6e25d691be49c89a2b1c007b249f11 with infra container: kube-system/kube-proxy-s7gjc/POD" id=21531a62-dc00-409f-ab39-8bb95a3627a1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:09:26 newest-cni-718789 crio[840]: time="2025-10-17T20:09:26.032667241Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=75ecf896-12dc-4256-83c4-754b3c0addf0 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:09:26 newest-cni-718789 crio[840]: time="2025-10-17T20:09:26.033519492Z" level=info msg="Creating container: kube-system/kindnet-lxdzb/kindnet-cni" id=8b80fc23-44b4-4452-bb96-66a21fa9bb6d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:09:26 newest-cni-718789 crio[840]: time="2025-10-17T20:09:26.03526059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:26 newest-cni-718789 crio[840]: time="2025-10-17T20:09:26.035788781Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d9bb226f-a8e0-435d-91a7-21098d843660 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:09:26 newest-cni-718789 crio[840]: time="2025-10-17T20:09:26.050829743Z" level=info msg="Creating container: kube-system/kube-proxy-s7gjc/kube-proxy" id=52c87903-14e4-4abb-abd5-ff18a2fc5bc6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:09:26 newest-cni-718789 crio[840]: time="2025-10-17T20:09:26.052094809Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:26 newest-cni-718789 crio[840]: time="2025-10-17T20:09:26.05213996Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:26 newest-cni-718789 crio[840]: time="2025-10-17T20:09:26.056576115Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:26 newest-cni-718789 crio[840]: time="2025-10-17T20:09:26.067388048Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:26 newest-cni-718789 crio[840]: time="2025-10-17T20:09:26.067936029Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:26 newest-cni-718789 crio[840]: time="2025-10-17T20:09:26.138329117Z" level=info msg="Created container 5b3a4c33fdc9a662f6b4c43fe20efbba9823b37736538e07eab1f4a7988046e8: kube-system/kindnet-lxdzb/kindnet-cni" id=8b80fc23-44b4-4452-bb96-66a21fa9bb6d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:09:26 newest-cni-718789 crio[840]: time="2025-10-17T20:09:26.147150252Z" level=info msg="Starting container: 5b3a4c33fdc9a662f6b4c43fe20efbba9823b37736538e07eab1f4a7988046e8" id=d49fffcb-57d9-4ea9-af00-786025a507ac name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:09:26 newest-cni-718789 crio[840]: time="2025-10-17T20:09:26.157367231Z" level=info msg="Started container" PID=1492 containerID=5b3a4c33fdc9a662f6b4c43fe20efbba9823b37736538e07eab1f4a7988046e8 description=kube-system/kindnet-lxdzb/kindnet-cni id=d49fffcb-57d9-4ea9-af00-786025a507ac name=/runtime.v1.RuntimeService/StartContainer sandboxID=016f81ef7d892faaa6ace22d610276ba262a5e30a6bc180e8ab6edb285b8c494
	Oct 17 20:09:26 newest-cni-718789 crio[840]: time="2025-10-17T20:09:26.233268912Z" level=info msg="Created container ada69415f845d5e24e394427420839ad1bcfe85edbfd83e244e90e2eb27169b7: kube-system/kube-proxy-s7gjc/kube-proxy" id=52c87903-14e4-4abb-abd5-ff18a2fc5bc6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:09:26 newest-cni-718789 crio[840]: time="2025-10-17T20:09:26.23408836Z" level=info msg="Starting container: ada69415f845d5e24e394427420839ad1bcfe85edbfd83e244e90e2eb27169b7" id=731dea87-c21e-49ac-b1f4-66d4064d422e name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:09:26 newest-cni-718789 crio[840]: time="2025-10-17T20:09:26.236794519Z" level=info msg="Started container" PID=1495 containerID=ada69415f845d5e24e394427420839ad1bcfe85edbfd83e244e90e2eb27169b7 description=kube-system/kube-proxy-s7gjc/kube-proxy id=731dea87-c21e-49ac-b1f4-66d4064d422e name=/runtime.v1.RuntimeService/StartContainer sandboxID=5bd70d44dfd53d7842ea593dd464c9d9bc6e25d691be49c89a2b1c007b249f11
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ada69415f845d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   1 second ago        Running             kube-proxy                0                   5bd70d44dfd53       kube-proxy-s7gjc                            kube-system
	5b3a4c33fdc9a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   1 second ago        Running             kindnet-cni               0                   016f81ef7d892       kindnet-lxdzb                               kube-system
	2914f4fc974ff       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            0                   0eb49dd0cc6cc       kube-apiserver-newest-cni-718789            kube-system
	6268becbe321f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            0                   5c1840cf79489       kube-scheduler-newest-cni-718789            kube-system
	575f261200304       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   0                   80fe788e8dd25       kube-controller-manager-newest-cni-718789   kube-system
	0215b9ef66b92       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      0                   bcdf72b40a003       etcd-newest-cni-718789                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-718789
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-718789
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=newest-cni-718789
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_09_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:09:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-718789
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:09:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:09:20 +0000   Fri, 17 Oct 2025 20:09:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:09:20 +0000   Fri, 17 Oct 2025 20:09:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:09:20 +0000   Fri, 17 Oct 2025 20:09:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 17 Oct 2025 20:09:20 +0000   Fri, 17 Oct 2025 20:09:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-718789
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                6401c5a6-7a14-4968-8d2b-14b1d23b2a13
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-718789                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7s
	  kube-system                 kindnet-lxdzb                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-718789             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-718789    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-s7gjc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-718789             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Normal   Starting                 15s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15s (x8 over 15s)  kubelet          Node newest-cni-718789 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 15s)  kubelet          Node newest-cni-718789 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 15s)  kubelet          Node newest-cni-718789 status is now: NodeHasSufficientPID
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7s                 kubelet          Node newest-cni-718789 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7s                 kubelet          Node newest-cni-718789 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7s                 kubelet          Node newest-cni-718789 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-718789 event: Registered Node newest-cni-718789 in Controller
	
	
	==> dmesg <==
	[Oct17 19:46] overlayfs: idmapped layers are currently not supported
	[ +18.070710] overlayfs: idmapped layers are currently not supported
	[Oct17 19:47] overlayfs: idmapped layers are currently not supported
	[ +43.697346] overlayfs: idmapped layers are currently not supported
	[Oct17 19:48] overlayfs: idmapped layers are currently not supported
	[Oct17 19:49] overlayfs: idmapped layers are currently not supported
	[ +26.194162] overlayfs: idmapped layers are currently not supported
	[Oct17 19:50] overlayfs: idmapped layers are currently not supported
	[Oct17 19:52] overlayfs: idmapped layers are currently not supported
	[Oct17 19:54] overlayfs: idmapped layers are currently not supported
	[Oct17 19:55] overlayfs: idmapped layers are currently not supported
	[Oct17 19:56] overlayfs: idmapped layers are currently not supported
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:01] overlayfs: idmapped layers are currently not supported
	[ +29.873287] overlayfs: idmapped layers are currently not supported
	[Oct17 20:02] overlayfs: idmapped layers are currently not supported
	[ +29.827785] overlayfs: idmapped layers are currently not supported
	[Oct17 20:03] overlayfs: idmapped layers are currently not supported
	[Oct17 20:04] overlayfs: idmapped layers are currently not supported
	[Oct17 20:05] overlayfs: idmapped layers are currently not supported
	[Oct17 20:06] overlayfs: idmapped layers are currently not supported
	[Oct17 20:07] overlayfs: idmapped layers are currently not supported
	[ +30.002292] overlayfs: idmapped layers are currently not supported
	[Oct17 20:08] overlayfs: idmapped layers are currently not supported
	[Oct17 20:09] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0215b9ef66b92ab22fb4025680083a465ee43f93b1f99085a42f2bd53069afa5] <==
	{"level":"warn","ts":"2025-10-17T20:09:15.941824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:15.950883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:15.970009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:15.986881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.015895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.037642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.057192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.079819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.120728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.172696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.188338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.201855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.221269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.235778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.252274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.271352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.287474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.302815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.318477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.334008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.354752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.374721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.411012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.428763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:16.486034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47866","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:09:28 up  2:51,  0 user,  load average: 3.71, 4.48, 3.39
	Linux newest-cni-718789 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5b3a4c33fdc9a662f6b4c43fe20efbba9823b37736538e07eab1f4a7988046e8] <==
	I1017 20:09:26.226262       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:09:26.228427       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 20:09:26.228604       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:09:26.228618       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:09:26.228632       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:09:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:09:26.415906       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:09:26.415978       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:09:26.416018       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:09:26.416993       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [2914f4fc974ff9c2c03aafcd0d461138dfa9694c7db9d30cdc216c50932dea68] <==
	I1017 20:09:17.354566       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:09:17.354794       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1017 20:09:17.377884       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1017 20:09:17.379277       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 20:09:17.382953       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 20:09:17.385845       1 controller.go:667] quota admission added evaluator for: namespaces
	E1017 20:09:17.398581       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1017 20:09:17.460335       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:09:18.055408       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 20:09:18.062483       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 20:09:18.062510       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:09:18.753479       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:09:18.801712       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:09:18.885256       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 20:09:18.892109       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1017 20:09:18.893204       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:09:18.897764       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:09:19.198051       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:09:19.956216       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:09:19.971475       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 20:09:19.988400       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 20:09:24.603451       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:09:24.910972       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1017 20:09:24.996761       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:09:25.030281       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [575f261200304f375748de15449864ca54c6353a48cdc1971c444262be19b843] <==
	I1017 20:09:24.242247       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 20:09:24.242264       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 20:09:24.242368       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:09:24.242386       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:09:24.242394       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:09:24.242285       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 20:09:24.242623       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 20:09:24.242295       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 20:09:24.242902       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 20:09:24.242954       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 20:09:24.243689       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 20:09:24.244425       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 20:09:24.248184       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:09:24.249463       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 20:09:24.249527       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 20:09:24.249554       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 20:09:24.249610       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:09:24.249645       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:09:24.249654       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:09:24.249661       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 20:09:24.252705       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:09:24.253142       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1017 20:09:24.259830       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-718789" podCIDRs=["10.42.0.0/24"]
	I1017 20:09:24.262116       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 20:09:24.262242       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	
	
	==> kube-proxy [ada69415f845d5e24e394427420839ad1bcfe85edbfd83e244e90e2eb27169b7] <==
	I1017 20:09:26.284400       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:09:26.370110       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:09:26.478376       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:09:26.478416       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 20:09:26.478482       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:09:26.523660       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:09:26.524153       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:09:26.548190       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:09:26.552081       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:09:26.552158       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:09:26.556805       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:09:26.556886       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:09:26.557246       1 config.go:200] "Starting service config controller"
	I1017 20:09:26.557299       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:09:26.557650       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:09:26.558877       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:09:26.559437       1 config.go:309] "Starting node config controller"
	I1017 20:09:26.559455       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:09:26.559462       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:09:26.658951       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:09:26.659010       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 20:09:26.659253       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [6268becbe321f3689fa1a157bb52be59e33b5385b8969c8fbcdd1491d2ee117c] <==
	E1017 20:09:17.341191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:09:17.341266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 20:09:17.341348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 20:09:17.341420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:09:17.341494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 20:09:17.341561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:09:17.341650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:09:17.341721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 20:09:17.341848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 20:09:17.341948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:09:17.342023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 20:09:17.342100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 20:09:17.342207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 20:09:18.226250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 20:09:18.258070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 20:09:18.309678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 20:09:18.329457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 20:09:18.336377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:09:18.377193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 20:09:18.444632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 20:09:18.448037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:09:18.448037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:09:18.463002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 20:09:18.550908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1017 20:09:21.615433       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:09:21 newest-cni-718789 kubelet[1311]: E1017 20:09:21.011751    1311 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-718789\" already exists" pod="kube-system/kube-apiserver-newest-cni-718789"
	Oct 17 20:09:21 newest-cni-718789 kubelet[1311]: I1017 20:09:21.142432    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-718789" podStartSLOduration=1.142412781 podStartE2EDuration="1.142412781s" podCreationTimestamp="2025-10-17 20:09:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:09:21.091263012 +0000 UTC m=+1.330638165" watchObservedRunningTime="2025-10-17 20:09:21.142412781 +0000 UTC m=+1.381787934"
	Oct 17 20:09:21 newest-cni-718789 kubelet[1311]: I1017 20:09:21.158546    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-718789" podStartSLOduration=1.158527085 podStartE2EDuration="1.158527085s" podCreationTimestamp="2025-10-17 20:09:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:09:21.143387737 +0000 UTC m=+1.382762890" watchObservedRunningTime="2025-10-17 20:09:21.158527085 +0000 UTC m=+1.397902238"
	Oct 17 20:09:21 newest-cni-718789 kubelet[1311]: I1017 20:09:21.176642    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-718789" podStartSLOduration=2.176622275 podStartE2EDuration="2.176622275s" podCreationTimestamp="2025-10-17 20:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:09:21.160455008 +0000 UTC m=+1.399830186" watchObservedRunningTime="2025-10-17 20:09:21.176622275 +0000 UTC m=+1.415997420"
	Oct 17 20:09:21 newest-cni-718789 kubelet[1311]: I1017 20:09:21.176771    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-718789" podStartSLOduration=1.17675663 podStartE2EDuration="1.17675663s" podCreationTimestamp="2025-10-17 20:09:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:09:21.174160984 +0000 UTC m=+1.413536137" watchObservedRunningTime="2025-10-17 20:09:21.17675663 +0000 UTC m=+1.416131792"
	Oct 17 20:09:24 newest-cni-718789 kubelet[1311]: I1017 20:09:24.313554    1311 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 17 20:09:24 newest-cni-718789 kubelet[1311]: I1017 20:09:24.314454    1311 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 17 20:09:25 newest-cni-718789 kubelet[1311]: I1017 20:09:25.137551    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5f8a65f1-734c-4cc7-be69-7554cd4a7f07-cni-cfg\") pod \"kindnet-lxdzb\" (UID: \"5f8a65f1-734c-4cc7-be69-7554cd4a7f07\") " pod="kube-system/kindnet-lxdzb"
	Oct 17 20:09:25 newest-cni-718789 kubelet[1311]: I1017 20:09:25.137686    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f8a65f1-734c-4cc7-be69-7554cd4a7f07-xtables-lock\") pod \"kindnet-lxdzb\" (UID: \"5f8a65f1-734c-4cc7-be69-7554cd4a7f07\") " pod="kube-system/kindnet-lxdzb"
	Oct 17 20:09:25 newest-cni-718789 kubelet[1311]: I1017 20:09:25.137706    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f8a65f1-734c-4cc7-be69-7554cd4a7f07-lib-modules\") pod \"kindnet-lxdzb\" (UID: \"5f8a65f1-734c-4cc7-be69-7554cd4a7f07\") " pod="kube-system/kindnet-lxdzb"
	Oct 17 20:09:25 newest-cni-718789 kubelet[1311]: I1017 20:09:25.137847    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnz9v\" (UniqueName: \"kubernetes.io/projected/5f8a65f1-734c-4cc7-be69-7554cd4a7f07-kube-api-access-vnz9v\") pod \"kindnet-lxdzb\" (UID: \"5f8a65f1-734c-4cc7-be69-7554cd4a7f07\") " pod="kube-system/kindnet-lxdzb"
	Oct 17 20:09:25 newest-cni-718789 kubelet[1311]: I1017 20:09:25.137874    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a08b3286-dc61-4ffc-8654-7be35ce377c6-kube-proxy\") pod \"kube-proxy-s7gjc\" (UID: \"a08b3286-dc61-4ffc-8654-7be35ce377c6\") " pod="kube-system/kube-proxy-s7gjc"
	Oct 17 20:09:25 newest-cni-718789 kubelet[1311]: I1017 20:09:25.137993    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a08b3286-dc61-4ffc-8654-7be35ce377c6-xtables-lock\") pod \"kube-proxy-s7gjc\" (UID: \"a08b3286-dc61-4ffc-8654-7be35ce377c6\") " pod="kube-system/kube-proxy-s7gjc"
	Oct 17 20:09:25 newest-cni-718789 kubelet[1311]: I1017 20:09:25.138020    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a08b3286-dc61-4ffc-8654-7be35ce377c6-lib-modules\") pod \"kube-proxy-s7gjc\" (UID: \"a08b3286-dc61-4ffc-8654-7be35ce377c6\") " pod="kube-system/kube-proxy-s7gjc"
	Oct 17 20:09:25 newest-cni-718789 kubelet[1311]: I1017 20:09:25.138067    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm8tv\" (UniqueName: \"kubernetes.io/projected/a08b3286-dc61-4ffc-8654-7be35ce377c6-kube-api-access-zm8tv\") pod \"kube-proxy-s7gjc\" (UID: \"a08b3286-dc61-4ffc-8654-7be35ce377c6\") " pod="kube-system/kube-proxy-s7gjc"
	Oct 17 20:09:25 newest-cni-718789 kubelet[1311]: E1017 20:09:25.294185    1311 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 17 20:09:25 newest-cni-718789 kubelet[1311]: E1017 20:09:25.294232    1311 projected.go:196] Error preparing data for projected volume kube-api-access-zm8tv for pod kube-system/kube-proxy-s7gjc: configmap "kube-root-ca.crt" not found
	Oct 17 20:09:25 newest-cni-718789 kubelet[1311]: E1017 20:09:25.294331    1311 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a08b3286-dc61-4ffc-8654-7be35ce377c6-kube-api-access-zm8tv podName:a08b3286-dc61-4ffc-8654-7be35ce377c6 nodeName:}" failed. No retries permitted until 2025-10-17 20:09:25.794301848 +0000 UTC m=+6.033676993 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zm8tv" (UniqueName: "kubernetes.io/projected/a08b3286-dc61-4ffc-8654-7be35ce377c6-kube-api-access-zm8tv") pod "kube-proxy-s7gjc" (UID: "a08b3286-dc61-4ffc-8654-7be35ce377c6") : configmap "kube-root-ca.crt" not found
	Oct 17 20:09:25 newest-cni-718789 kubelet[1311]: E1017 20:09:25.303370    1311 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 17 20:09:25 newest-cni-718789 kubelet[1311]: E1017 20:09:25.303404    1311 projected.go:196] Error preparing data for projected volume kube-api-access-vnz9v for pod kube-system/kindnet-lxdzb: configmap "kube-root-ca.crt" not found
	Oct 17 20:09:25 newest-cni-718789 kubelet[1311]: E1017 20:09:25.303475    1311 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f8a65f1-734c-4cc7-be69-7554cd4a7f07-kube-api-access-vnz9v podName:5f8a65f1-734c-4cc7-be69-7554cd4a7f07 nodeName:}" failed. No retries permitted until 2025-10-17 20:09:25.803454535 +0000 UTC m=+6.042829688 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vnz9v" (UniqueName: "kubernetes.io/projected/5f8a65f1-734c-4cc7-be69-7554cd4a7f07-kube-api-access-vnz9v") pod "kindnet-lxdzb" (UID: "5f8a65f1-734c-4cc7-be69-7554cd4a7f07") : configmap "kube-root-ca.crt" not found
	Oct 17 20:09:25 newest-cni-718789 kubelet[1311]: I1017 20:09:25.872229    1311 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 17 20:09:26 newest-cni-718789 kubelet[1311]: W1017 20:09:26.012988    1311 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498/crio-5bd70d44dfd53d7842ea593dd464c9d9bc6e25d691be49c89a2b1c007b249f11 WatchSource:0}: Error finding container 5bd70d44dfd53d7842ea593dd464c9d9bc6e25d691be49c89a2b1c007b249f11: Status 404 returned error can't find the container with id 5bd70d44dfd53d7842ea593dd464c9d9bc6e25d691be49c89a2b1c007b249f11
	Oct 17 20:09:27 newest-cni-718789 kubelet[1311]: I1017 20:09:27.149613    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-lxdzb" podStartSLOduration=3.149593085 podStartE2EDuration="3.149593085s" podCreationTimestamp="2025-10-17 20:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:09:27.084879261 +0000 UTC m=+7.324254406" watchObservedRunningTime="2025-10-17 20:09:27.149593085 +0000 UTC m=+7.388968247"
	Oct 17 20:09:27 newest-cni-718789 kubelet[1311]: I1017 20:09:27.703246    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s7gjc" podStartSLOduration=3.70322548 podStartE2EDuration="3.70322548s" podCreationTimestamp="2025-10-17 20:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:09:27.150089581 +0000 UTC m=+7.389464734" watchObservedRunningTime="2025-10-17 20:09:27.70322548 +0000 UTC m=+7.942600633"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-718789 -n newest-cni-718789
E1017 20:09:28.755399  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-718789 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-6pm4f storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-718789 describe pod coredns-66bc5c9577-6pm4f storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-718789 describe pod coredns-66bc5c9577-6pm4f storage-provisioner: exit status 1 (101.536662ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-6pm4f" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-718789 describe pod coredns-66bc5c9577-6pm4f storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.62s)
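Note on the kubelet errors in the log above: the repeated MountVolume.SetUp failures for kube-proxy and kindnet ("configmap \"kube-root-ca.crt\" not found") are the usual transient state right after the node restarts, since that ConfigMap is only published into each namespace by kube-controller-manager once it is running, and the kubelet retries the projected-volume mount every 500ms until it appears. Assuming the profile is still up, a quick manual check (not part of the test run) that the ConfigMap has since been created would be:

	kubectl --context newest-cni-718789 -n kube-system get configmap kube-root-ca.crt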

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-740780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-740780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (373.733455ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:09:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-740780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-740780 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-740780 describe deploy/metrics-server -n kube-system: exit status 1 (129.948487ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-740780 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
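The MK_ADDON_ENABLE_PAUSED error above shows the exact probe minikube ran before enabling the addon: it shells into the node and lists runc-managed containers with "sudo runc list -f json", and that probe failed because /run/runc did not exist at that moment. A rough way to re-run the same probe by hand against this profile (a sketch, assuming the node is still running and that minikube ssh forwards the quoted command):

	out/minikube-linux-arm64 -p default-k8s-diff-port-740780 ssh "sudo runc list -f json"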
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-740780
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-740780:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395",
	        "Created": "2025-10-17T20:08:03.310435059Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471940,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:08:03.530652313Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395/hostname",
	        "HostsPath": "/var/lib/docker/containers/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395/hosts",
	        "LogPath": "/var/lib/docker/containers/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395-json.log",
	        "Name": "/default-k8s-diff-port-740780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-740780:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-740780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395",
	                "LowerDir": "/var/lib/docker/overlay2/280fba353d4fefed83ab3bd7b3798c5b596f4b4c372a4f322e0f6bae68b71860-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/280fba353d4fefed83ab3bd7b3798c5b596f4b4c372a4f322e0f6bae68b71860/merged",
	                "UpperDir": "/var/lib/docker/overlay2/280fba353d4fefed83ab3bd7b3798c5b596f4b4c372a4f322e0f6bae68b71860/diff",
	                "WorkDir": "/var/lib/docker/overlay2/280fba353d4fefed83ab3bd7b3798c5b596f4b4c372a4f322e0f6bae68b71860/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-740780",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-740780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-740780",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-740780",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-740780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "71c71313129fde0b45620b23b0aec3dbac6c22c7b3c21b8fb34508e0ef22003f",
	            "SandboxKey": "/var/run/docker/netns/71c71313129f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-740780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:b4:88:d9:42:46",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b07c93b74eadee92a26c052eb44e638916a69f6583542a7473d7302a377567bf",
	                    "EndpointID": "2310040369cef3be99492528fd1a08406ff3102c452329054435c79ce639bd48",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-740780",
	                        "fedc9c1ddaae"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
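For reference, the NetworkSettings section above shows how the non-default API server port requested with --apiserver-port=8444 is published on the host (127.0.0.1:33443). Assuming the container is still running, the same mapping can be read back directly from docker (a follow-up command for illustration, not run by the test):

	docker port default-k8s-diff-port-740780 8444/tcp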
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-740780 -n default-k8s-diff-port-740780
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-740780 logs -n 25
E1017 20:09:36.125527  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-740780 logs -n 25: (1.543417783s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:05 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-413711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │                     │
	│ stop    │ -p no-preload-413711 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable dashboard -p no-preload-413711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p no-preload-413711 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable metrics-server -p embed-certs-572724 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ stop    │ -p embed-certs-572724 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable dashboard -p embed-certs-572724 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:08 UTC │
	│ image   │ no-preload-413711 image list --format=json                                                                                                                                                                                                    │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ pause   │ -p no-preload-413711 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ delete  │ -p no-preload-413711                                                                                                                                                                                                                          │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ delete  │ -p no-preload-413711                                                                                                                                                                                                                          │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ delete  │ -p disable-driver-mounts-672422                                                                                                                                                                                                               │ disable-driver-mounts-672422 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p default-k8s-diff-port-740780 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:09 UTC │
	│ image   │ embed-certs-572724 image list --format=json                                                                                                                                                                                                   │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ pause   │ -p embed-certs-572724 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ delete  │ -p embed-certs-572724                                                                                                                                                                                                                         │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ delete  │ -p embed-certs-572724                                                                                                                                                                                                                         │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p newest-cni-718789 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ addons  │ enable metrics-server -p newest-cni-718789 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	│ stop    │ -p newest-cni-718789 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ addons  │ enable dashboard -p newest-cni-718789 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p newest-cni-718789 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-740780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:09:30
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:09:30.631437  478863 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:09:30.631613  478863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:09:30.631638  478863 out.go:374] Setting ErrFile to fd 2...
	I1017 20:09:30.631656  478863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:09:30.631931  478863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:09:30.632334  478863 out.go:368] Setting JSON to false
	I1017 20:09:30.633317  478863 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10322,"bootTime":1760721449,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 20:09:30.633412  478863 start.go:141] virtualization:  
	I1017 20:09:30.638408  478863 out.go:179] * [newest-cni-718789] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:09:30.641709  478863 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:09:30.641778  478863 notify.go:220] Checking for updates...
	I1017 20:09:30.648164  478863 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:09:30.651090  478863 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:09:30.654050  478863 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 20:09:30.656857  478863 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:09:30.659669  478863 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:09:30.662924  478863 config.go:182] Loaded profile config "newest-cni-718789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:09:30.663539  478863 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:09:30.700952  478863 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:09:30.701078  478863 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:09:30.758401  478863 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:09:30.749366504 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:09:30.758513  478863 docker.go:318] overlay module found
	I1017 20:09:30.761516  478863 out.go:179] * Using the docker driver based on existing profile
	I1017 20:09:30.764670  478863 start.go:305] selected driver: docker
	I1017 20:09:30.764700  478863 start.go:925] validating driver "docker" against &{Name:newest-cni-718789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:09:30.764800  478863 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:09:30.765533  478863 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:09:30.815663  478863 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:09:30.806848352 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:09:30.816044  478863 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 20:09:30.816077  478863 cni.go:84] Creating CNI manager for ""
	I1017 20:09:30.816134  478863 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:09:30.816172  478863 start.go:349] cluster config:
	{Name:newest-cni-718789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:09:30.819426  478863 out.go:179] * Starting "newest-cni-718789" primary control-plane node in "newest-cni-718789" cluster
	I1017 20:09:30.822050  478863 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:09:30.824998  478863 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:09:30.827710  478863 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:09:30.827742  478863 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:09:30.827821  478863 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:09:30.827831  478863 cache.go:58] Caching tarball of preloaded images
	I1017 20:09:30.827911  478863 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:09:30.827921  478863 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:09:30.828032  478863 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/config.json ...
	I1017 20:09:30.848045  478863 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:09:30.848071  478863 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:09:30.848090  478863 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:09:30.848118  478863 start.go:360] acquireMachinesLock for newest-cni-718789: {Name:mk25e52e47b384e7eeae83275e6a385fb152458a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:09:30.848195  478863 start.go:364] duration metric: took 47.72µs to acquireMachinesLock for "newest-cni-718789"
	I1017 20:09:30.848222  478863 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:09:30.848233  478863 fix.go:54] fixHost starting: 
	I1017 20:09:30.848506  478863 cli_runner.go:164] Run: docker container inspect newest-cni-718789 --format={{.State.Status}}
	I1017 20:09:30.864939  478863 fix.go:112] recreateIfNeeded on newest-cni-718789: state=Stopped err=<nil>
	W1017 20:09:30.864969  478863 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:09:30.868257  478863 out.go:252] * Restarting existing docker container for "newest-cni-718789" ...
	I1017 20:09:30.868332  478863 cli_runner.go:164] Run: docker start newest-cni-718789
	I1017 20:09:31.150260  478863 cli_runner.go:164] Run: docker container inspect newest-cni-718789 --format={{.State.Status}}
	I1017 20:09:31.173056  478863 kic.go:430] container "newest-cni-718789" state is running.
	I1017 20:09:31.173682  478863 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718789
	I1017 20:09:31.196620  478863 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/config.json ...
	I1017 20:09:31.196851  478863 machine.go:93] provisionDockerMachine start ...
	I1017 20:09:31.196910  478863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:09:31.223306  478863 main.go:141] libmachine: Using SSH client type: native
	I1017 20:09:31.223635  478863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1017 20:09:31.223645  478863 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:09:31.224320  478863 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 20:09:34.372235  478863 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-718789
	
	I1017 20:09:34.372268  478863 ubuntu.go:182] provisioning hostname "newest-cni-718789"
	I1017 20:09:34.372382  478863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:09:34.390273  478863 main.go:141] libmachine: Using SSH client type: native
	I1017 20:09:34.390580  478863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1017 20:09:34.390599  478863 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-718789 && echo "newest-cni-718789" | sudo tee /etc/hostname
	I1017 20:09:34.550578  478863 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-718789
	
	I1017 20:09:34.550701  478863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:09:34.571993  478863 main.go:141] libmachine: Using SSH client type: native
	I1017 20:09:34.572307  478863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1017 20:09:34.572324  478863 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-718789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-718789/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-718789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:09:34.721739  478863 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:09:34.721768  478863 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 20:09:34.721806  478863 ubuntu.go:190] setting up certificates
	I1017 20:09:34.721817  478863 provision.go:84] configureAuth start
	I1017 20:09:34.721893  478863 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718789
	I1017 20:09:34.742816  478863 provision.go:143] copyHostCerts
	I1017 20:09:34.742891  478863 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 20:09:34.742912  478863 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 20:09:34.742990  478863 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 20:09:34.743130  478863 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 20:09:34.743217  478863 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 20:09:34.743268  478863 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 20:09:34.743346  478863 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 20:09:34.743357  478863 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 20:09:34.743385  478863 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 20:09:34.743448  478863 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.newest-cni-718789 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-718789]
	I1017 20:09:35.474498  478863 provision.go:177] copyRemoteCerts
	I1017 20:09:35.474586  478863 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:09:35.474631  478863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:09:35.494515  478863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/newest-cni-718789/id_rsa Username:docker}
	I1017 20:09:35.605782  478863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:09:35.627787  478863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	
	
	==> CRI-O <==
	Oct 17 20:09:22 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:22.352770199Z" level=info msg="Created container 15a41537ac3985ae7d24dcb906c65321ee1e507a6a58386d40a7d6e952cd76e6: kube-system/coredns-66bc5c9577-6mknt/coredns" id=f5b1a37e-ad19-455a-a17f-f86b22680bbf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:09:22 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:22.3545904Z" level=info msg="Starting container: 15a41537ac3985ae7d24dcb906c65321ee1e507a6a58386d40a7d6e952cd76e6" id=3635cf9b-930b-4b2c-86fd-7b5b7b1d4c5f name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:09:22 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:22.384271606Z" level=info msg="Started container" PID=1768 containerID=15a41537ac3985ae7d24dcb906c65321ee1e507a6a58386d40a7d6e952cd76e6 description=kube-system/coredns-66bc5c9577-6mknt/coredns id=3635cf9b-930b-4b2c-86fd-7b5b7b1d4c5f name=/runtime.v1.RuntimeService/StartContainer sandboxID=15747ee8b21a54c43a9fd644e3ccd54e20386054abdbe1392bfcf4b35e169923
	Oct 17 20:09:26 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:26.073659648Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9f3df8a5-6b37-4a94-b5a1-6229294c5c19 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:09:26 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:26.073741074Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:26 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:26.082003332Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:15c9f6f35225c7c8ae1ade90f104f5e7f4d619425b81d6b4712a36d44e7f55d4 UID:a22cdfdc-f249-4c36-b136-1e956a4ac0f0 NetNS:/var/run/netns/7b874fa0-0960-4de7-aed4-67039bd05d76 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078d10}] Aliases:map[]}"
	Oct 17 20:09:26 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:26.082059248Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 17 20:09:26 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:26.111654286Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:15c9f6f35225c7c8ae1ade90f104f5e7f4d619425b81d6b4712a36d44e7f55d4 UID:a22cdfdc-f249-4c36-b136-1e956a4ac0f0 NetNS:/var/run/netns/7b874fa0-0960-4de7-aed4-67039bd05d76 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078d10}] Aliases:map[]}"
	Oct 17 20:09:26 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:26.11186233Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 17 20:09:26 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:26.125281544Z" level=info msg="Ran pod sandbox 15c9f6f35225c7c8ae1ade90f104f5e7f4d619425b81d6b4712a36d44e7f55d4 with infra container: default/busybox/POD" id=9f3df8a5-6b37-4a94-b5a1-6229294c5c19 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:09:26 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:26.126732524Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e7344f69-d094-4236-8fca-938cb362aea5 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:09:26 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:26.126956067Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e7344f69-d094-4236-8fca-938cb362aea5 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:09:26 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:26.12706031Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e7344f69-d094-4236-8fca-938cb362aea5 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:09:26 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:26.128076627Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3a221043-9c17-4e43-b14e-85bee278c7fe name=/runtime.v1.ImageService/PullImage
	Oct 17 20:09:26 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:26.131651168Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 20:09:28 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:28.23777017Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=3a221043-9c17-4e43-b14e-85bee278c7fe name=/runtime.v1.ImageService/PullImage
	Oct 17 20:09:28 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:28.239497532Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=85a4750e-aa7d-4ece-8a08-96c8fb9e6f1b name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:09:28 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:28.244565785Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=01cf0ae9-3610-4bf0-9ddf-5658de5c3430 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:09:28 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:28.250131936Z" level=info msg="Creating container: default/busybox/busybox" id=083a8e5f-c772-4acb-931e-c9d5f812b17b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:09:28 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:28.251199779Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:28 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:28.261488264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:28 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:28.262225934Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:28 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:28.285260034Z" level=info msg="Created container a4cd490c044dbf7a2459e0ea8933d6f548caaa5f0608b302dbdb96fc2a3677dd: default/busybox/busybox" id=083a8e5f-c772-4acb-931e-c9d5f812b17b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:09:28 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:28.289305553Z" level=info msg="Starting container: a4cd490c044dbf7a2459e0ea8933d6f548caaa5f0608b302dbdb96fc2a3677dd" id=739877cc-8959-4b78-ba56-8e346bd96cb6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:09:28 default-k8s-diff-port-740780 crio[839]: time="2025-10-17T20:09:28.294793044Z" level=info msg="Started container" PID=1822 containerID=a4cd490c044dbf7a2459e0ea8933d6f548caaa5f0608b302dbdb96fc2a3677dd description=default/busybox/busybox id=739877cc-8959-4b78-ba56-8e346bd96cb6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=15c9f6f35225c7c8ae1ade90f104f5e7f4d619425b81d6b4712a36d44e7f55d4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	a4cd490c044db       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   15c9f6f35225c       busybox                                                default
	15a41537ac398       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   15747ee8b21a5       coredns-66bc5c9577-6mknt                               kube-system
	8b2439e7026b6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   01ddcf7a88f88       storage-provisioner                                    kube-system
	c12d5025b3762       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   14fbf4dfdcc29       kindnet-fnx26                                          kube-system
	32e777bad16ce       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   87d969b57cc15       kube-proxy-8x772                                       kube-system
	de4f84c74b4f8       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   5e46ecf3dba88       kube-scheduler-default-k8s-diff-port-740780            kube-system
	4357e9eb3f807       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   7412cf124a0c4       kube-apiserver-default-k8s-diff-port-740780            kube-system
	d8e803e1ae999       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   ce974b09caf67       kube-controller-manager-default-k8s-diff-port-740780   kube-system
	6995603750bc5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   ea2b8d2259226       etcd-default-k8s-diff-port-740780                      kube-system
	
	
	==> coredns [15a41537ac3985ae7d24dcb906c65321ee1e507a6a58386d40a7d6e952cd76e6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35632 - 53018 "HINFO IN 2389755923271212082.771853261583762670. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.030224888s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-740780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-740780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=default-k8s-diff-port-740780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_08_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:08:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-740780
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:09:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:09:36 +0000   Fri, 17 Oct 2025 20:08:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:09:36 +0000   Fri, 17 Oct 2025 20:08:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:09:36 +0000   Fri, 17 Oct 2025 20:08:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:09:36 +0000   Fri, 17 Oct 2025 20:09:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-740780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                aeaa4ad6-0a8d-467b-bdc0-41bfb9026ea7
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-6mknt                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-default-k8s-diff-port-740780                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-fnx26                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-740780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-740780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-8x772                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-740780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 55s   kube-proxy       
	  Normal   Starting                 62s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s   kubelet          Node default-k8s-diff-port-740780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s   kubelet          Node default-k8s-diff-port-740780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s   kubelet          Node default-k8s-diff-port-740780 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s   node-controller  Node default-k8s-diff-port-740780 event: Registered Node default-k8s-diff-port-740780 in Controller
	  Normal   NodeReady                15s   kubelet          Node default-k8s-diff-port-740780 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct17 19:46] overlayfs: idmapped layers are currently not supported
	[ +18.070710] overlayfs: idmapped layers are currently not supported
	[Oct17 19:47] overlayfs: idmapped layers are currently not supported
	[ +43.697346] overlayfs: idmapped layers are currently not supported
	[Oct17 19:48] overlayfs: idmapped layers are currently not supported
	[Oct17 19:49] overlayfs: idmapped layers are currently not supported
	[ +26.194162] overlayfs: idmapped layers are currently not supported
	[Oct17 19:50] overlayfs: idmapped layers are currently not supported
	[Oct17 19:52] overlayfs: idmapped layers are currently not supported
	[Oct17 19:54] overlayfs: idmapped layers are currently not supported
	[Oct17 19:55] overlayfs: idmapped layers are currently not supported
	[Oct17 19:56] overlayfs: idmapped layers are currently not supported
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:01] overlayfs: idmapped layers are currently not supported
	[ +29.873287] overlayfs: idmapped layers are currently not supported
	[Oct17 20:02] overlayfs: idmapped layers are currently not supported
	[ +29.827785] overlayfs: idmapped layers are currently not supported
	[Oct17 20:03] overlayfs: idmapped layers are currently not supported
	[Oct17 20:04] overlayfs: idmapped layers are currently not supported
	[Oct17 20:05] overlayfs: idmapped layers are currently not supported
	[Oct17 20:06] overlayfs: idmapped layers are currently not supported
	[Oct17 20:07] overlayfs: idmapped layers are currently not supported
	[ +30.002292] overlayfs: idmapped layers are currently not supported
	[Oct17 20:08] overlayfs: idmapped layers are currently not supported
	[Oct17 20:09] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6995603750bc58405f847c55ddfdb4cb5ee41e9b5ae11efc80f1e8ecd8846094] <==
	{"level":"warn","ts":"2025-10-17T20:08:30.778020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:30.857324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:30.897924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:30.945251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:30.963841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:30.983733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:30.997269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:31.019552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:31.054662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:31.076904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:31.092458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:31.106795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:31.128289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:31.142861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:31.159787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:31.198913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:31.215708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:31.260512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:31.275769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:31.294411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:31.327093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:31.353755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:31.371603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:31.390437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:08:31.500618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36730","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:09:37 up  2:52,  0 user,  load average: 3.41, 4.40, 3.37
	Linux default-k8s-diff-port-740780 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c12d5025b3762dc3e636c2c7fa11e12673511d8d62ba5ac84bffe577f6385a13] <==
	I1017 20:08:41.436928       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:08:41.437176       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 20:08:41.437305       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:08:41.437316       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:08:41.437329       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:08:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:08:41.634819       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:08:41.634843       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:08:41.634851       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:08:41.635119       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 20:09:11.634688       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 20:09:11.634985       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 20:09:11.635171       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 20:09:11.636486       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1017 20:09:13.235870       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:09:13.235913       1 metrics.go:72] Registering metrics
	I1017 20:09:13.235984       1 controller.go:711] "Syncing nftables rules"
	I1017 20:09:21.641202       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:09:21.641241       1 main.go:301] handling current node
	I1017 20:09:31.636602       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:09:31.636636       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4357e9eb3f807a1d36d8a1f048cfdb1827be11c0c90c3f375babc79c590fb595] <==
	E1017 20:08:32.546566       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1017 20:08:32.593107       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:08:32.621420       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:08:32.622339       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1017 20:08:32.631358       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:08:32.631506       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:08:32.703147       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:08:33.202215       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 20:08:33.217051       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 20:08:33.217145       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:08:33.987289       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:08:34.052871       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:08:34.202663       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 20:08:34.223882       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1017 20:08:34.225532       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:08:34.235933       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:08:34.440999       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:08:34.991002       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:08:35.023483       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 20:08:35.047203       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 20:08:40.245548       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:08:40.394990       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1017 20:08:40.553827       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:08:40.559259       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1017 20:09:34.968114       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:56124: use of closed network connection
	
	
	==> kube-controller-manager [d8e803e1ae999ede34080e2fa7a24dbb94e4a389b6a8a9123d2c9c9ebbc9c4c6] <==
	I1017 20:08:39.502018       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:08:39.502041       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:08:39.502057       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:08:39.502063       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 20:08:39.504507       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 20:08:39.504707       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 20:08:39.504811       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-740780"
	I1017 20:08:39.504885       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1017 20:08:39.513498       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:08:39.514061       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-740780" podCIDRs=["10.244.0.0/24"]
	I1017 20:08:39.521603       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 20:08:39.530250       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 20:08:39.537875       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:08:39.537900       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:08:39.537907       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:08:39.537968       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 20:08:39.539323       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 20:08:39.539525       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 20:08:39.539673       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 20:08:39.541375       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:08:39.541782       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 20:08:39.545754       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:08:39.552594       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 20:08:39.556028       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:09:24.510921       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [32e777bad16ce699d469ec4d22ce020c6024f304eb7b088090da6790af1fe6ef] <==
	I1017 20:08:41.541367       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:08:41.730293       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:08:41.830512       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:08:41.830550       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1017 20:08:41.830638       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:08:41.950410       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:08:41.950469       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:08:41.954706       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:08:41.959203       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:08:41.959227       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:08:41.960336       1 config.go:200] "Starting service config controller"
	I1017 20:08:41.960350       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:08:41.961572       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:08:41.961581       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:08:41.961607       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:08:41.961611       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:08:41.961988       1 config.go:309] "Starting node config controller"
	I1017 20:08:41.961994       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:08:41.961999       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:08:42.094720       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:08:42.094810       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 20:08:42.095138       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [de4f84c74b4f82d1170de991099412d3213491dacfa96494e3f203e111fcc1ea] <==
	E1017 20:08:32.476151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 20:08:32.476595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 20:08:32.479862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 20:08:32.480213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 20:08:32.480260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 20:08:32.480296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 20:08:32.480327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:08:32.480364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 20:08:32.480409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 20:08:32.480919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 20:08:32.481111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:08:32.481171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 20:08:33.291800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:08:33.311258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:08:33.312287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 20:08:33.384087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 20:08:33.385250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:08:33.390962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 20:08:33.426583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:08:33.532669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 20:08:33.541276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 20:08:33.595730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 20:08:33.626648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:08:33.664296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1017 20:08:35.967446       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:08:36 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:08:36.153349    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-740780" podStartSLOduration=1.153321022 podStartE2EDuration="1.153321022s" podCreationTimestamp="2025-10-17 20:08:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:08:36.134122609 +0000 UTC m=+1.291156519" watchObservedRunningTime="2025-10-17 20:08:36.153321022 +0000 UTC m=+1.310354933"
	Oct 17 20:08:39 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:08:39.557711    1329 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 17 20:08:39 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:08:39.558693    1329 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 17 20:08:40 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:08:40.628017    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/19f55ff7-64eb-4407-9168-aa18ddbe543c-kube-proxy\") pod \"kube-proxy-8x772\" (UID: \"19f55ff7-64eb-4407-9168-aa18ddbe543c\") " pod="kube-system/kube-proxy-8x772"
	Oct 17 20:08:40 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:08:40.628076    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19f55ff7-64eb-4407-9168-aa18ddbe543c-lib-modules\") pod \"kube-proxy-8x772\" (UID: \"19f55ff7-64eb-4407-9168-aa18ddbe543c\") " pod="kube-system/kube-proxy-8x772"
	Oct 17 20:08:40 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:08:40.628099    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16e1d707-7d88-4317-ab9f-dd7698ee1cd1-lib-modules\") pod \"kindnet-fnx26\" (UID: \"16e1d707-7d88-4317-ab9f-dd7698ee1cd1\") " pod="kube-system/kindnet-fnx26"
	Oct 17 20:08:40 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:08:40.628119    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/16e1d707-7d88-4317-ab9f-dd7698ee1cd1-cni-cfg\") pod \"kindnet-fnx26\" (UID: \"16e1d707-7d88-4317-ab9f-dd7698ee1cd1\") " pod="kube-system/kindnet-fnx26"
	Oct 17 20:08:40 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:08:40.628151    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7gkv\" (UniqueName: \"kubernetes.io/projected/16e1d707-7d88-4317-ab9f-dd7698ee1cd1-kube-api-access-m7gkv\") pod \"kindnet-fnx26\" (UID: \"16e1d707-7d88-4317-ab9f-dd7698ee1cd1\") " pod="kube-system/kindnet-fnx26"
	Oct 17 20:08:40 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:08:40.628175    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19f55ff7-64eb-4407-9168-aa18ddbe543c-xtables-lock\") pod \"kube-proxy-8x772\" (UID: \"19f55ff7-64eb-4407-9168-aa18ddbe543c\") " pod="kube-system/kube-proxy-8x772"
	Oct 17 20:08:40 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:08:40.628192    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnjxx\" (UniqueName: \"kubernetes.io/projected/19f55ff7-64eb-4407-9168-aa18ddbe543c-kube-api-access-hnjxx\") pod \"kube-proxy-8x772\" (UID: \"19f55ff7-64eb-4407-9168-aa18ddbe543c\") " pod="kube-system/kube-proxy-8x772"
	Oct 17 20:08:40 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:08:40.628208    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16e1d707-7d88-4317-ab9f-dd7698ee1cd1-xtables-lock\") pod \"kindnet-fnx26\" (UID: \"16e1d707-7d88-4317-ab9f-dd7698ee1cd1\") " pod="kube-system/kindnet-fnx26"
	Oct 17 20:08:40 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:08:40.784625    1329 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 17 20:08:41 default-k8s-diff-port-740780 kubelet[1329]: W1017 20:08:41.140035    1329 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395/crio-87d969b57cc15b1882b788e0c04eb1bae7e774433062e5dfbab7dc6da0388f39 WatchSource:0}: Error finding container 87d969b57cc15b1882b788e0c04eb1bae7e774433062e5dfbab7dc6da0388f39: Status 404 returned error can't find the container with id 87d969b57cc15b1882b788e0c04eb1bae7e774433062e5dfbab7dc6da0388f39
	Oct 17 20:08:42 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:08:42.313228    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-fnx26" podStartSLOduration=2.313189988 podStartE2EDuration="2.313189988s" podCreationTimestamp="2025-10-17 20:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:08:42.312985596 +0000 UTC m=+7.470019506" watchObservedRunningTime="2025-10-17 20:08:42.313189988 +0000 UTC m=+7.470223890"
	Oct 17 20:08:43 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:08:43.477717    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8x772" podStartSLOduration=3.477688053 podStartE2EDuration="3.477688053s" podCreationTimestamp="2025-10-17 20:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:08:42.364229558 +0000 UTC m=+7.521263477" watchObservedRunningTime="2025-10-17 20:08:43.477688053 +0000 UTC m=+8.634721955"
	Oct 17 20:09:21 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:09:21.877224    1329 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 17 20:09:21 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:09:21.969302    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw9mx\" (UniqueName: \"kubernetes.io/projected/15647d52-61fb-4af6-8d28-66da6ebd0923-kube-api-access-vw9mx\") pod \"coredns-66bc5c9577-6mknt\" (UID: \"15647d52-61fb-4af6-8d28-66da6ebd0923\") " pod="kube-system/coredns-66bc5c9577-6mknt"
	Oct 17 20:09:21 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:09:21.969607    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15647d52-61fb-4af6-8d28-66da6ebd0923-config-volume\") pod \"coredns-66bc5c9577-6mknt\" (UID: \"15647d52-61fb-4af6-8d28-66da6ebd0923\") " pod="kube-system/coredns-66bc5c9577-6mknt"
	Oct 17 20:09:21 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:09:21.969657    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqvhm\" (UniqueName: \"kubernetes.io/projected/f0266236-3025-407f-ae0f-c4e9e5ae8ff0-kube-api-access-vqvhm\") pod \"storage-provisioner\" (UID: \"f0266236-3025-407f-ae0f-c4e9e5ae8ff0\") " pod="kube-system/storage-provisioner"
	Oct 17 20:09:21 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:09:21.969685    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f0266236-3025-407f-ae0f-c4e9e5ae8ff0-tmp\") pod \"storage-provisioner\" (UID: \"f0266236-3025-407f-ae0f-c4e9e5ae8ff0\") " pod="kube-system/storage-provisioner"
	Oct 17 20:09:22 default-k8s-diff-port-740780 kubelet[1329]: W1017 20:09:22.260703    1329 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395/crio-15747ee8b21a54c43a9fd644e3ccd54e20386054abdbe1392bfcf4b35e169923 WatchSource:0}: Error finding container 15747ee8b21a54c43a9fd644e3ccd54e20386054abdbe1392bfcf4b35e169923: Status 404 returned error can't find the container with id 15747ee8b21a54c43a9fd644e3ccd54e20386054abdbe1392bfcf4b35e169923
	Oct 17 20:09:23 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:09:23.349044    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.349027216 podStartE2EDuration="41.349027216s" podCreationTimestamp="2025-10-17 20:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:09:22.347555866 +0000 UTC m=+47.504589784" watchObservedRunningTime="2025-10-17 20:09:23.349027216 +0000 UTC m=+48.506061118"
	Oct 17 20:09:23 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:09:23.369715    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6mknt" podStartSLOduration=43.369685684 podStartE2EDuration="43.369685684s" podCreationTimestamp="2025-10-17 20:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:09:23.350373887 +0000 UTC m=+48.507407797" watchObservedRunningTime="2025-10-17 20:09:23.369685684 +0000 UTC m=+48.526719586"
	Oct 17 20:09:25 default-k8s-diff-port-740780 kubelet[1329]: I1017 20:09:25.801774    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvkx4\" (UniqueName: \"kubernetes.io/projected/a22cdfdc-f249-4c36-b136-1e956a4ac0f0-kube-api-access-tvkx4\") pod \"busybox\" (UID: \"a22cdfdc-f249-4c36-b136-1e956a4ac0f0\") " pod="default/busybox"
	Oct 17 20:09:26 default-k8s-diff-port-740780 kubelet[1329]: W1017 20:09:26.123497    1329 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395/crio-15c9f6f35225c7c8ae1ade90f104f5e7f4d619425b81d6b4712a36d44e7f55d4 WatchSource:0}: Error finding container 15c9f6f35225c7c8ae1ade90f104f5e7f4d619425b81d6b4712a36d44e7f55d4: Status 404 returned error can't find the container with id 15c9f6f35225c7c8ae1ade90f104f5e7f4d619425b81d6b4712a36d44e7f55d4
	
	
	==> storage-provisioner [8b2439e7026b67c2990bf266152d4b3f318c50507dca6e12b97541d2eb38a6b1] <==
	I1017 20:09:22.323627       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:09:22.371714       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:09:22.375520       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 20:09:22.381245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:09:22.394416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:09:22.394761       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:09:22.399514       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-740780_b1d746ab-9af7-497b-8fc7-da59c93ce5e0!
	I1017 20:09:22.406658       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81077895-6eb3-4ab5-abce-e2589ce9b483", APIVersion:"v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-740780_b1d746ab-9af7-497b-8fc7-da59c93ce5e0 became leader
	W1017 20:09:22.410624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:09:22.421542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:09:22.499710       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-740780_b1d746ab-9af7-497b-8fc7-da59c93ce5e0!
	W1017 20:09:24.424640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:09:24.428825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:09:26.432713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:09:26.440744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:09:28.444357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:09:28.448959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:09:30.452710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:09:30.458015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:09:32.461236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:09:32.468308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:09:34.476966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:09:34.482024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:09:36.484673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:09:36.497590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-740780 -n default-k8s-diff-port-740780
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-740780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.08s)
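
For anyone replaying the post-mortem check above outside the test harness, the following is a minimal sketch (not part of helpers_test.go; file and variable names are illustrative only) that shells out to kubectl with the same jsonpath and field-selector flags the harness runs and prints any pods that are not in the Running phase. It assumes kubectl is on PATH and that the default-k8s-diff-port-740780 context from the failing test still exists.

	// nonrunning.go: illustrative sketch, not part of the minikube test suite.
	// It repeats the harness's post-mortem query: list pod names across all
	// namespaces whose status.phase is not Running.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Context name copied from the failing test above; change as needed.
		ctx := "default-k8s-diff-port-740780"
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "po",
			"-o=jsonpath={.items[*].metadata.name}",
			"-A",
			"--field-selector=status.phase!=Running",
		).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s\n", err, out)
			return
		}
		names := strings.Fields(string(out))
		if len(names) == 0 {
			fmt.Println("no non-Running pods found")
			return
		}
		fmt.Printf("%d pod(s) not Running: %s\n", len(names), strings.Join(names, " "))
	}

On a healthy cluster the jsonpath result is empty and the sketch reports no non-Running pods, which mirrors the empty output of the harness's kubectl call above.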

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.87s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-718789 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-718789 --alsologtostderr -v=1: exit status 80 (2.34218271s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-718789 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:09:47.487096  481272 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:09:47.487225  481272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:09:47.487236  481272 out.go:374] Setting ErrFile to fd 2...
	I1017 20:09:47.487242  481272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:09:47.487530  481272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:09:47.487799  481272 out.go:368] Setting JSON to false
	I1017 20:09:47.487830  481272 mustload.go:65] Loading cluster: newest-cni-718789
	I1017 20:09:47.488298  481272 config.go:182] Loaded profile config "newest-cni-718789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:09:47.488848  481272 cli_runner.go:164] Run: docker container inspect newest-cni-718789 --format={{.State.Status}}
	I1017 20:09:47.508577  481272 host.go:66] Checking if "newest-cni-718789" exists ...
	I1017 20:09:47.508883  481272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:09:47.574178  481272 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-17 20:09:47.564286645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:09:47.574808  481272 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-718789 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 20:09:47.578286  481272 out.go:179] * Pausing node newest-cni-718789 ... 
	I1017 20:09:47.581122  481272 host.go:66] Checking if "newest-cni-718789" exists ...
	I1017 20:09:47.581448  481272 ssh_runner.go:195] Run: systemctl --version
	I1017 20:09:47.581494  481272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:09:47.602357  481272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/newest-cni-718789/id_rsa Username:docker}
	I1017 20:09:47.711796  481272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:09:47.725419  481272 pause.go:52] kubelet running: true
	I1017 20:09:47.725485  481272 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:09:47.939582  481272 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:09:47.939681  481272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:09:48.013085  481272 cri.go:89] found id: "6431a4ca36b8e096a4d06ad1d26b38875cf3ae65fc1ff050170be7170b38bcdd"
	I1017 20:09:48.013113  481272 cri.go:89] found id: "a9b7e45667850ec74ca85981e8e7b537ee6dbe83ad9a4c14aac4d3006c8f931d"
	I1017 20:09:48.013119  481272 cri.go:89] found id: "fc8b1b886a8818d8867cb1f27b254636bf690f6338d52d794d2a5fe24e6afb17"
	I1017 20:09:48.013123  481272 cri.go:89] found id: "bf10220fe426e3e6e10f9b3b26eb7432ae81bc39b8d091cee13805fbf7585fb3"
	I1017 20:09:48.013126  481272 cri.go:89] found id: "6ae81ee5a964746ee11924e4851ada6bbdad70b4d25601b3cb321aa3c2eafb58"
	I1017 20:09:48.013130  481272 cri.go:89] found id: "44d8e518daaf7003deeb5318c8487caac4dc7e2dd9f5304c7652f42453d88c10"
	I1017 20:09:48.013133  481272 cri.go:89] found id: ""
	I1017 20:09:48.013189  481272 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:09:48.040449  481272 retry.go:31] will retry after 136.614522ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:09:48Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:09:48.177935  481272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:09:48.191143  481272 pause.go:52] kubelet running: false
	I1017 20:09:48.191231  481272 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:09:48.349576  481272 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:09:48.349699  481272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:09:48.424111  481272 cri.go:89] found id: "6431a4ca36b8e096a4d06ad1d26b38875cf3ae65fc1ff050170be7170b38bcdd"
	I1017 20:09:48.424190  481272 cri.go:89] found id: "a9b7e45667850ec74ca85981e8e7b537ee6dbe83ad9a4c14aac4d3006c8f931d"
	I1017 20:09:48.424209  481272 cri.go:89] found id: "fc8b1b886a8818d8867cb1f27b254636bf690f6338d52d794d2a5fe24e6afb17"
	I1017 20:09:48.424229  481272 cri.go:89] found id: "bf10220fe426e3e6e10f9b3b26eb7432ae81bc39b8d091cee13805fbf7585fb3"
	I1017 20:09:48.424263  481272 cri.go:89] found id: "6ae81ee5a964746ee11924e4851ada6bbdad70b4d25601b3cb321aa3c2eafb58"
	I1017 20:09:48.424288  481272 cri.go:89] found id: "44d8e518daaf7003deeb5318c8487caac4dc7e2dd9f5304c7652f42453d88c10"
	I1017 20:09:48.424309  481272 cri.go:89] found id: ""
	I1017 20:09:48.424399  481272 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:09:48.435418  481272 retry.go:31] will retry after 450.954692ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:09:48Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:09:48.886772  481272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:09:48.901392  481272 pause.go:52] kubelet running: false
	I1017 20:09:48.901456  481272 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:09:49.046905  481272 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:09:49.046977  481272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:09:49.123382  481272 cri.go:89] found id: "6431a4ca36b8e096a4d06ad1d26b38875cf3ae65fc1ff050170be7170b38bcdd"
	I1017 20:09:49.123402  481272 cri.go:89] found id: "a9b7e45667850ec74ca85981e8e7b537ee6dbe83ad9a4c14aac4d3006c8f931d"
	I1017 20:09:49.123408  481272 cri.go:89] found id: "fc8b1b886a8818d8867cb1f27b254636bf690f6338d52d794d2a5fe24e6afb17"
	I1017 20:09:49.123411  481272 cri.go:89] found id: "bf10220fe426e3e6e10f9b3b26eb7432ae81bc39b8d091cee13805fbf7585fb3"
	I1017 20:09:49.123415  481272 cri.go:89] found id: "6ae81ee5a964746ee11924e4851ada6bbdad70b4d25601b3cb321aa3c2eafb58"
	I1017 20:09:49.123419  481272 cri.go:89] found id: "44d8e518daaf7003deeb5318c8487caac4dc7e2dd9f5304c7652f42453d88c10"
	I1017 20:09:49.123422  481272 cri.go:89] found id: ""
	I1017 20:09:49.123470  481272 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:09:49.135372  481272 retry.go:31] will retry after 386.793144ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:09:49Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:09:49.523011  481272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:09:49.535420  481272 pause.go:52] kubelet running: false
	I1017 20:09:49.535491  481272 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:09:49.668810  481272 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:09:49.668884  481272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:09:49.739153  481272 cri.go:89] found id: "6431a4ca36b8e096a4d06ad1d26b38875cf3ae65fc1ff050170be7170b38bcdd"
	I1017 20:09:49.739177  481272 cri.go:89] found id: "a9b7e45667850ec74ca85981e8e7b537ee6dbe83ad9a4c14aac4d3006c8f931d"
	I1017 20:09:49.739182  481272 cri.go:89] found id: "fc8b1b886a8818d8867cb1f27b254636bf690f6338d52d794d2a5fe24e6afb17"
	I1017 20:09:49.739186  481272 cri.go:89] found id: "bf10220fe426e3e6e10f9b3b26eb7432ae81bc39b8d091cee13805fbf7585fb3"
	I1017 20:09:49.739190  481272 cri.go:89] found id: "6ae81ee5a964746ee11924e4851ada6bbdad70b4d25601b3cb321aa3c2eafb58"
	I1017 20:09:49.739193  481272 cri.go:89] found id: "44d8e518daaf7003deeb5318c8487caac4dc7e2dd9f5304c7652f42453d88c10"
	I1017 20:09:49.739196  481272 cri.go:89] found id: ""
	I1017 20:09:49.739251  481272 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:09:49.753212  481272 out.go:203] 
	W1017 20:09:49.756115  481272 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:09:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:09:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:09:49.756131  481272 out.go:285] * 
	* 
	W1017 20:09:49.762751  481272 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:09:49.765686  481272 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-718789 --alsologtostderr -v=1 failed: exit status 80
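Note on the exit-status-80 failure above: minikube's pause helper first enumerates the kube-system containers with crictl, then shells out to "sudo runc list -f json", which fails here because /run/runc does not exist on this crio node; the retry loop gives up after three attempts and the command exits with GUEST_PAUSE. A minimal reproduction sketch of that probe sequence, run inside the node (for example via "minikube ssh -p newest-cni-718789"); the three probe commands are the ones shown verbatim in the stderr log above, and the trailing echo is only illustrative:

    # kubelet check, container enumeration, then the runc listing that fails
    sudo systemctl is-active --quiet service kubelet; echo "kubelet active: $?"
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo runc list -f json   # fails: "open /run/runc: no such file or directory"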
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-718789
helpers_test.go:243: (dbg) docker inspect newest-cni-718789:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498",
	        "Created": "2025-10-17T20:08:54.624965091Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 478992,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:09:30.904413019Z",
	            "FinishedAt": "2025-10-17T20:09:29.951688775Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498/hostname",
	        "HostsPath": "/var/lib/docker/containers/637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498/hosts",
	        "LogPath": "/var/lib/docker/containers/637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498/637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498-json.log",
	        "Name": "/newest-cni-718789",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-718789:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-718789",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498",
	                "LowerDir": "/var/lib/docker/overlay2/10560d65db01a75a4f3eeb4cd08a7e8876413ee4947ae1830f45d6bc860947dc-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/10560d65db01a75a4f3eeb4cd08a7e8876413ee4947ae1830f45d6bc860947dc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/10560d65db01a75a4f3eeb4cd08a7e8876413ee4947ae1830f45d6bc860947dc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/10560d65db01a75a4f3eeb4cd08a7e8876413ee4947ae1830f45d6bc860947dc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-718789",
	                "Source": "/var/lib/docker/volumes/newest-cni-718789/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-718789",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-718789",
	                "name.minikube.sigs.k8s.io": "newest-cni-718789",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e67feeda0c8cf126a71f6323a90ae68a221dd6c145b81a9e25acc874a184997f",
	            "SandboxKey": "/var/run/docker/netns/e67feeda0c8c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-718789": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:ea:d4:9b:0c:c2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8cd2eedf95aa208e706bcc7b2b128ff9ad782ac6990bd5bc75c6c1730d2dbe6",
	                    "EndpointID": "cca590f0987729eab7b9d66dbb1971148367276652b9c6862ce07628499c0c06",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-718789",
	                        "637fa246d690"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
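The inspect output above is also where the tooling derives its SSH endpoint: 22/tcp inside the container is published on 127.0.0.1:33450, the same port the pause log connected to. A minimal sketch of that lookup, using the Go template visible earlier in the stderr log and this report's profile name:

    # prints 33450, matching the "22/tcp" entry under NetworkSettings.Ports above
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-718789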
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-718789 -n newest-cni-718789
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-718789 -n newest-cni-718789: exit status 2 (387.625253ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-718789 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-718789 logs -n 25: (1.394681412s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p no-preload-413711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p no-preload-413711 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable metrics-server -p embed-certs-572724 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ stop    │ -p embed-certs-572724 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable dashboard -p embed-certs-572724 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:08 UTC │
	│ image   │ no-preload-413711 image list --format=json                                                                                                                                                                                                    │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ pause   │ -p no-preload-413711 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ delete  │ -p no-preload-413711                                                                                                                                                                                                                          │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ delete  │ -p no-preload-413711                                                                                                                                                                                                                          │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ delete  │ -p disable-driver-mounts-672422                                                                                                                                                                                                               │ disable-driver-mounts-672422 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p default-k8s-diff-port-740780 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:09 UTC │
	│ image   │ embed-certs-572724 image list --format=json                                                                                                                                                                                                   │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ pause   │ -p embed-certs-572724 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ delete  │ -p embed-certs-572724                                                                                                                                                                                                                         │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ delete  │ -p embed-certs-572724                                                                                                                                                                                                                         │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p newest-cni-718789 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ addons  │ enable metrics-server -p newest-cni-718789 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	│ stop    │ -p newest-cni-718789 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ addons  │ enable dashboard -p newest-cni-718789 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p newest-cni-718789 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-740780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-740780 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	│ image   │ newest-cni-718789 image list --format=json                                                                                                                                                                                                    │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ pause   │ -p newest-cni-718789 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:09:30
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:09:30.631437  478863 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:09:30.631613  478863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:09:30.631638  478863 out.go:374] Setting ErrFile to fd 2...
	I1017 20:09:30.631656  478863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:09:30.631931  478863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:09:30.632334  478863 out.go:368] Setting JSON to false
	I1017 20:09:30.633317  478863 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10322,"bootTime":1760721449,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 20:09:30.633412  478863 start.go:141] virtualization:  
	I1017 20:09:30.638408  478863 out.go:179] * [newest-cni-718789] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:09:30.641709  478863 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:09:30.641778  478863 notify.go:220] Checking for updates...
	I1017 20:09:30.648164  478863 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:09:30.651090  478863 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:09:30.654050  478863 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 20:09:30.656857  478863 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:09:30.659669  478863 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:09:30.662924  478863 config.go:182] Loaded profile config "newest-cni-718789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:09:30.663539  478863 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:09:30.700952  478863 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:09:30.701078  478863 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:09:30.758401  478863 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:09:30.749366504 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:09:30.758513  478863 docker.go:318] overlay module found
	I1017 20:09:30.761516  478863 out.go:179] * Using the docker driver based on existing profile
	I1017 20:09:30.764670  478863 start.go:305] selected driver: docker
	I1017 20:09:30.764700  478863 start.go:925] validating driver "docker" against &{Name:newest-cni-718789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718789 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:09:30.764800  478863 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:09:30.765533  478863 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:09:30.815663  478863 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:09:30.806848352 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:09:30.816044  478863 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 20:09:30.816077  478863 cni.go:84] Creating CNI manager for ""
	I1017 20:09:30.816134  478863 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:09:30.816172  478863 start.go:349] cluster config:
	{Name:newest-cni-718789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:09:30.819426  478863 out.go:179] * Starting "newest-cni-718789" primary control-plane node in "newest-cni-718789" cluster
	I1017 20:09:30.822050  478863 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:09:30.824998  478863 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:09:30.827710  478863 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:09:30.827742  478863 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:09:30.827821  478863 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:09:30.827831  478863 cache.go:58] Caching tarball of preloaded images
	I1017 20:09:30.827911  478863 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:09:30.827921  478863 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:09:30.828032  478863 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/config.json ...
	I1017 20:09:30.848045  478863 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:09:30.848071  478863 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:09:30.848090  478863 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:09:30.848118  478863 start.go:360] acquireMachinesLock for newest-cni-718789: {Name:mk25e52e47b384e7eeae83275e6a385fb152458a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:09:30.848195  478863 start.go:364] duration metric: took 47.72µs to acquireMachinesLock for "newest-cni-718789"
	I1017 20:09:30.848222  478863 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:09:30.848233  478863 fix.go:54] fixHost starting: 
	I1017 20:09:30.848506  478863 cli_runner.go:164] Run: docker container inspect newest-cni-718789 --format={{.State.Status}}
	I1017 20:09:30.864939  478863 fix.go:112] recreateIfNeeded on newest-cni-718789: state=Stopped err=<nil>
	W1017 20:09:30.864969  478863 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:09:30.868257  478863 out.go:252] * Restarting existing docker container for "newest-cni-718789" ...
	I1017 20:09:30.868332  478863 cli_runner.go:164] Run: docker start newest-cni-718789
	I1017 20:09:31.150260  478863 cli_runner.go:164] Run: docker container inspect newest-cni-718789 --format={{.State.Status}}
	I1017 20:09:31.173056  478863 kic.go:430] container "newest-cni-718789" state is running.
	I1017 20:09:31.173682  478863 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718789
	I1017 20:09:31.196620  478863 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/config.json ...
	I1017 20:09:31.196851  478863 machine.go:93] provisionDockerMachine start ...
	I1017 20:09:31.196910  478863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:09:31.223306  478863 main.go:141] libmachine: Using SSH client type: native
	I1017 20:09:31.223635  478863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1017 20:09:31.223645  478863 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:09:31.224320  478863 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 20:09:34.372235  478863 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-718789
	
	I1017 20:09:34.372268  478863 ubuntu.go:182] provisioning hostname "newest-cni-718789"
	I1017 20:09:34.372382  478863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:09:34.390273  478863 main.go:141] libmachine: Using SSH client type: native
	I1017 20:09:34.390580  478863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1017 20:09:34.390599  478863 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-718789 && echo "newest-cni-718789" | sudo tee /etc/hostname
	I1017 20:09:34.550578  478863 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-718789
	
	I1017 20:09:34.550701  478863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:09:34.571993  478863 main.go:141] libmachine: Using SSH client type: native
	I1017 20:09:34.572307  478863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1017 20:09:34.572324  478863 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-718789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-718789/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-718789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:09:34.721739  478863 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:09:34.721768  478863 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 20:09:34.721806  478863 ubuntu.go:190] setting up certificates
	I1017 20:09:34.721817  478863 provision.go:84] configureAuth start
	I1017 20:09:34.721893  478863 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718789
	I1017 20:09:34.742816  478863 provision.go:143] copyHostCerts
	I1017 20:09:34.742891  478863 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 20:09:34.742912  478863 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 20:09:34.742990  478863 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 20:09:34.743130  478863 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 20:09:34.743217  478863 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 20:09:34.743268  478863 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 20:09:34.743346  478863 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 20:09:34.743357  478863 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 20:09:34.743385  478863 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 20:09:34.743448  478863 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.newest-cni-718789 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-718789]
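
The provision.go step above issues a CA-signed server certificate whose SANs are 127.0.0.1, 192.168.85.2, localhost, minikube and newest-cni-718789, with org jenkins.newest-cni-718789. A minimal sketch of that issuance with crypto/x509, assuming PKCS#1 RSA PEM keys and illustrative local file names (ca.pem, ca-key.pem, server.pem); minikube's real implementation differs in detail:

    // servercert.go: sketch of issuing a CA-signed server cert with the SANs above.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // mustPEM reads a file and returns the DER bytes of its first PEM block.
    func mustPEM(path string) []byte {
    	b, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	blk, _ := pem.Decode(b)
    	if blk == nil {
    		log.Fatalf("no PEM data in %s", path)
    	}
    	return blk.Bytes
    }

    func main() {
    	caCert, err := x509.ParseCertificate(mustPEM("ca.pem"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem")) // assumes PKCS#1 RSA key
    	if err != nil {
    		log.Fatal(err)
    	}
    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-718789"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "newest-cni-718789"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	out, _ := os.Create("server.pem")
    	defer out.Close()
    	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
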
	I1017 20:09:35.474498  478863 provision.go:177] copyRemoteCerts
	I1017 20:09:35.474586  478863 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:09:35.474631  478863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:09:35.494515  478863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/newest-cni-718789/id_rsa Username:docker}
	I1017 20:09:35.605782  478863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:09:35.627787  478863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:09:35.651980  478863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 20:09:35.689162  478863 provision.go:87] duration metric: took 967.291658ms to configureAuth
	I1017 20:09:35.689188  478863 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:09:35.689424  478863 config.go:182] Loaded profile config "newest-cni-718789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:09:35.689562  478863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:09:35.709633  478863 main.go:141] libmachine: Using SSH client type: native
	I1017 20:09:35.709942  478863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I1017 20:09:35.709959  478863 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:09:36.074660  478863 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:09:36.074682  478863 machine.go:96] duration metric: took 4.87782126s to provisionDockerMachine
	I1017 20:09:36.074693  478863 start.go:293] postStartSetup for "newest-cni-718789" (driver="docker")
	I1017 20:09:36.074704  478863 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:09:36.074768  478863 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:09:36.074825  478863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:09:36.118746  478863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/newest-cni-718789/id_rsa Username:docker}
	I1017 20:09:36.235391  478863 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:09:36.239578  478863 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:09:36.239604  478863 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:09:36.239615  478863 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 20:09:36.239675  478863 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 20:09:36.239751  478863 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 20:09:36.239852  478863 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:09:36.254658  478863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:09:36.277017  478863 start.go:296] duration metric: took 202.308914ms for postStartSetup
	I1017 20:09:36.277134  478863 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:09:36.277214  478863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:09:36.295054  478863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/newest-cni-718789/id_rsa Username:docker}
	I1017 20:09:36.402939  478863 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:09:36.408417  478863 fix.go:56] duration metric: took 5.560177443s for fixHost
	I1017 20:09:36.408440  478863 start.go:83] releasing machines lock for "newest-cni-718789", held for 5.56023021s
	I1017 20:09:36.408633  478863 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-718789
	I1017 20:09:36.430691  478863 ssh_runner.go:195] Run: cat /version.json
	I1017 20:09:36.430756  478863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:09:36.431064  478863 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:09:36.431118  478863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:09:36.461051  478863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/newest-cni-718789/id_rsa Username:docker}
	I1017 20:09:36.477371  478863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/newest-cni-718789/id_rsa Username:docker}
	I1017 20:09:36.689768  478863 ssh_runner.go:195] Run: systemctl --version
	I1017 20:09:36.697908  478863 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:09:36.758475  478863 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:09:36.765077  478863 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:09:36.765150  478863 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:09:36.774395  478863 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:09:36.774421  478863 start.go:495] detecting cgroup driver to use...
	I1017 20:09:36.774455  478863 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:09:36.774522  478863 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:09:36.792926  478863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:09:36.811375  478863 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:09:36.811488  478863 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:09:36.829024  478863 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:09:36.844083  478863 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:09:36.989394  478863 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:09:37.146617  478863 docker.go:234] disabling docker service ...
	I1017 20:09:37.146698  478863 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:09:37.166794  478863 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:09:37.181800  478863 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:09:37.344056  478863 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:09:37.505862  478863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:09:37.527051  478863 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:09:37.547905  478863 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:09:37.547970  478863 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:37.567009  478863 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:09:37.567094  478863 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:37.582000  478863 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:37.593370  478863 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:37.605853  478863 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:09:37.618760  478863 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:37.636664  478863 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:37.658100  478863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
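
Taken together, the sed edits above should leave a CRI-O drop-in roughly like the following. This is an illustrative reconstruction, not a capture from the node: only the four values are taken from the commands in this log, and the section headers assume the stock 02-crio.conf layout.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
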
	I1017 20:09:37.667381  478863 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:09:37.679245  478863 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:09:37.688852  478863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:09:37.823674  478863 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:09:38.017075  478863 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:09:38.017142  478863 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:09:38.023058  478863 start.go:563] Will wait 60s for crictl version
	I1017 20:09:38.023136  478863 ssh_runner.go:195] Run: which crictl
	I1017 20:09:38.030335  478863 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:09:38.065994  478863 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:09:38.066076  478863 ssh_runner.go:195] Run: crio --version
	I1017 20:09:38.133818  478863 ssh_runner.go:195] Run: crio --version
	I1017 20:09:38.182606  478863 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:09:38.185542  478863 cli_runner.go:164] Run: docker network inspect newest-cni-718789 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:09:38.212491  478863 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 20:09:38.217064  478863 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:09:38.246940  478863 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1017 20:09:38.250076  478863 kubeadm.go:883] updating cluster {Name:newest-cni-718789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:09:38.250248  478863 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:09:38.250342  478863 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:09:38.291821  478863 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:09:38.291845  478863 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:09:38.291913  478863 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:09:38.328643  478863 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:09:38.328667  478863 cache_images.go:85] Images are preloaded, skipping loading
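
The preload check above just asks the runtime which images it already has via `sudo crictl images --output json` and skips extraction when everything needed is present. A small sketch of the same query; the JSON field names are an assumption based on crictl's usual output shape and may differ across crictl versions:

    // preloadcheck.go: list the images the CRI runtime already holds,
    // mirroring the "all images are preloaded" check above.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    type imageList struct {
    	Images []struct {
    		ID       string   `json:"id"`
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	// Needs privileges to talk to /var/run/crio/crio.sock, hence sudo.
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	var imgs imageList
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		log.Fatal(err)
    	}
    	for _, img := range imgs.Images {
    		for _, tag := range img.RepoTags {
    			fmt.Println(tag)
    		}
    	}
    	fmt.Printf("%d images present in the runtime\n", len(imgs.Images))
    }
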
	I1017 20:09:38.328675  478863 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1017 20:09:38.328814  478863 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-718789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:09:38.328912  478863 ssh_runner.go:195] Run: crio config
	I1017 20:09:38.402601  478863 cni.go:84] Creating CNI manager for ""
	I1017 20:09:38.402625  478863 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:09:38.402677  478863 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1017 20:09:38.402710  478863 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-718789 NodeName:newest-cni-718789 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:09:38.402885  478863 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-718789"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:09:38.402968  478863 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:09:38.413277  478863 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:09:38.413384  478863 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:09:38.422536  478863 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1017 20:09:38.441153  478863 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:09:38.463809  478863 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
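
The kubeadm.yaml.new written above is the multi-document YAML stream printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sketch for inspecting such a file locally, assuming it has been copied off the node as kubeadm.yaml; it simply walks the stream and prints each document's apiVersion and kind:

    // kubeadmdocs.go: enumerate the documents in a multi-document kubeadm config.
    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break // end of the YAML stream
    			}
    			log.Fatal(err)
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }

Expected output for the config above would be the four kubeadm/kubelet/kube-proxy documents in order.
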
	I1017 20:09:38.478645  478863 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:09:38.483023  478863 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:09:38.494062  478863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:09:38.664945  478863 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:09:38.684973  478863 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789 for IP: 192.168.85.2
	I1017 20:09:38.685006  478863 certs.go:195] generating shared ca certs ...
	I1017 20:09:38.685062  478863 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:09:38.685260  478863 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 20:09:38.685340  478863 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 20:09:38.685355  478863 certs.go:257] generating profile certs ...
	I1017 20:09:38.685471  478863 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/client.key
	I1017 20:09:38.685572  478863 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/apiserver.key.2d8ce425
	I1017 20:09:38.685655  478863 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/proxy-client.key
	I1017 20:09:38.685810  478863 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 20:09:38.685885  478863 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 20:09:38.685901  478863 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:09:38.685947  478863 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:09:38.685997  478863 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:09:38.686034  478863 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 20:09:38.686109  478863 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:09:38.686837  478863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:09:38.718873  478863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:09:38.759614  478863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:09:38.778023  478863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 20:09:38.797032  478863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 20:09:38.816090  478863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 20:09:38.842741  478863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:09:38.873494  478863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/newest-cni-718789/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:09:38.891788  478863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 20:09:38.914283  478863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 20:09:38.933613  478863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:09:38.951302  478863 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:09:38.964170  478863 ssh_runner.go:195] Run: openssl version
	I1017 20:09:38.971256  478863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 20:09:38.980200  478863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 20:09:38.983884  478863 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 20:09:38.983948  478863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 20:09:39.029973  478863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 20:09:39.038022  478863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 20:09:39.046347  478863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 20:09:39.050062  478863 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 20:09:39.050164  478863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 20:09:39.091185  478863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:09:39.100221  478863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:09:39.108764  478863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:09:39.112586  478863 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:09:39.112692  478863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:09:39.153992  478863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
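
The certificate trust wiring above follows the standard OpenSSL layout: each CA file under /usr/share/ca-certificates is symlinked into /etc/ssl/certs as <subject-hash>.0, where the hash comes from `openssl x509 -hash -noout -in <cert>` (b5213941 for minikubeCA.pem in this run). A minimal sketch of one such link, shelling out to openssl the same way; run it with enough privileges to write /etc/ssl/certs:

    // catrust.go: compute a certificate's OpenSSL subject hash and symlink it
    // into /etc/ssl/certs as <hash>.0, mirroring the ln -fs steps above.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"

    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out))

    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // behave like ln -fs: replace any existing link
    	if err := os.Symlink(cert, link); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s -> %s\n", link, cert)
    }
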
	I1017 20:09:39.162203  478863 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:09:39.165881  478863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:09:39.207529  478863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:09:39.251141  478863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:09:39.294109  478863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:09:39.339658  478863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:09:39.388854  478863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 20:09:39.431515  478863 kubeadm.go:400] StartCluster: {Name:newest-cni-718789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-718789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:09:39.431652  478863 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:09:39.431748  478863 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:09:39.526770  478863 cri.go:89] found id: ""
	I1017 20:09:39.526836  478863 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:09:39.536908  478863 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:09:39.536925  478863 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:09:39.536978  478863 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:09:39.547234  478863 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:09:39.547777  478863 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-718789" does not appear in /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:09:39.548015  478863 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-257739/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-718789" cluster setting kubeconfig missing "newest-cni-718789" context setting]
	I1017 20:09:39.548431  478863 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:09:39.550799  478863 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:09:39.576425  478863 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1017 20:09:39.576454  478863 kubeadm.go:601] duration metric: took 39.523179ms to restartPrimaryControlPlane
	I1017 20:09:39.576462  478863 kubeadm.go:402] duration metric: took 144.958608ms to StartCluster
	I1017 20:09:39.576479  478863 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:09:39.576549  478863 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:09:39.577404  478863 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:09:39.579615  478863 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:09:39.579916  478863 config.go:182] Loaded profile config "newest-cni-718789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:09:39.579956  478863 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:09:39.580019  478863 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-718789"
	I1017 20:09:39.580033  478863 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-718789"
	W1017 20:09:39.580038  478863 addons.go:247] addon storage-provisioner should already be in state true
	I1017 20:09:39.580058  478863 host.go:66] Checking if "newest-cni-718789" exists ...
	I1017 20:09:39.580655  478863 cli_runner.go:164] Run: docker container inspect newest-cni-718789 --format={{.State.Status}}
	I1017 20:09:39.580992  478863 addons.go:69] Setting dashboard=true in profile "newest-cni-718789"
	I1017 20:09:39.581012  478863 addons.go:238] Setting addon dashboard=true in "newest-cni-718789"
	W1017 20:09:39.581018  478863 addons.go:247] addon dashboard should already be in state true
	I1017 20:09:39.581040  478863 host.go:66] Checking if "newest-cni-718789" exists ...
	I1017 20:09:39.581444  478863 cli_runner.go:164] Run: docker container inspect newest-cni-718789 --format={{.State.Status}}
	I1017 20:09:39.583927  478863 out.go:179] * Verifying Kubernetes components...
	I1017 20:09:39.584342  478863 addons.go:69] Setting default-storageclass=true in profile "newest-cni-718789"
	I1017 20:09:39.584498  478863 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-718789"
	I1017 20:09:39.589651  478863 cli_runner.go:164] Run: docker container inspect newest-cni-718789 --format={{.State.Status}}
	I1017 20:09:39.590925  478863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:09:39.658542  478863 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:09:39.663894  478863 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:09:39.663918  478863 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:09:39.663983  478863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:09:39.691090  478863 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 20:09:39.694499  478863 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1017 20:09:39.703425  478863 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 20:09:39.703459  478863 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 20:09:39.703527  478863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:09:39.707854  478863 addons.go:238] Setting addon default-storageclass=true in "newest-cni-718789"
	W1017 20:09:39.707875  478863 addons.go:247] addon default-storageclass should already be in state true
	I1017 20:09:39.707899  478863 host.go:66] Checking if "newest-cni-718789" exists ...
	I1017 20:09:39.708320  478863 cli_runner.go:164] Run: docker container inspect newest-cni-718789 --format={{.State.Status}}
	I1017 20:09:39.725793  478863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/newest-cni-718789/id_rsa Username:docker}
	I1017 20:09:39.790315  478863 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:09:39.790335  478863 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:09:39.790403  478863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-718789
	I1017 20:09:39.795084  478863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/newest-cni-718789/id_rsa Username:docker}
	I1017 20:09:39.824700  478863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/newest-cni-718789/id_rsa Username:docker}
	I1017 20:09:40.157670  478863 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:09:40.182431  478863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:09:40.210732  478863 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:09:40.210809  478863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:09:40.274353  478863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:09:40.334890  478863 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 20:09:40.334961  478863 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 20:09:40.419685  478863 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 20:09:40.419762  478863 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 20:09:40.473123  478863 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 20:09:40.473197  478863 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 20:09:40.525240  478863 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 20:09:40.525314  478863 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 20:09:40.563966  478863 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 20:09:40.564043  478863 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 20:09:40.583702  478863 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 20:09:40.583781  478863 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 20:09:40.606133  478863 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 20:09:40.606251  478863 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 20:09:40.626096  478863 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 20:09:40.626190  478863 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 20:09:40.646614  478863 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 20:09:40.646679  478863 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 20:09:40.674993  478863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 20:09:46.150458  478863 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.967996748s)
	I1017 20:09:46.150547  478863 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.939720616s)
	I1017 20:09:46.150570  478863 api_server.go:72] duration metric: took 6.570924665s to wait for apiserver process to appear ...
	I1017 20:09:46.150578  478863 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:09:46.150595  478863 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1017 20:09:46.150649  478863 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.47558619s)
	I1017 20:09:46.150571  478863 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.876145838s)
	I1017 20:09:46.153934  478863 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-718789 addons enable metrics-server
	
	I1017 20:09:46.176372  478863 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1017 20:09:46.178084  478863 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 20:09:46.178105  478863 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 20:09:46.179402  478863 addons.go:514] duration metric: took 6.599435113s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1017 20:09:46.650728  478863 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1017 20:09:46.658940  478863 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1017 20:09:46.660074  478863 api_server.go:141] control plane version: v1.34.1
	I1017 20:09:46.660098  478863 api_server.go:131] duration metric: took 509.514003ms to wait for apiserver health ...
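
The readiness wait above simply polls https://192.168.85.2:8443/healthz until it returns 200, tolerating the transient 500 seen while poststarthook/rbac/bootstrap-roles is still completing. A minimal sketch of that loop; the endpoint and rough timing mirror this run, and TLS verification is skipped only because this is a throwaway test cluster:

    // waithealthz.go: poll the apiserver healthz endpoint until it reports 200.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // test cluster only
    		},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.85.2:8443/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			log.Printf("healthz returned %d, retrying", resp.StatusCode)
    		} else {
    			log.Printf("healthz not reachable yet: %v", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("timed out waiting for apiserver healthz")
    }
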
	I1017 20:09:46.660119  478863 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:09:46.663433  478863 system_pods.go:59] 8 kube-system pods found
	I1017 20:09:46.663471  478863 system_pods.go:61] "coredns-66bc5c9577-6pm4f" [6b397048-b97c-490f-9af0-a896e0f0e9eb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 20:09:46.663510  478863 system_pods.go:61] "etcd-newest-cni-718789" [a1dfd64a-5104-4a5f-b417-07e968b5227b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:09:46.663526  478863 system_pods.go:61] "kindnet-lxdzb" [5f8a65f1-734c-4cc7-be69-7554cd4a7f07] Running
	I1017 20:09:46.663535  478863 system_pods.go:61] "kube-apiserver-newest-cni-718789" [aaa9a2d6-e322-4025-9c2c-3da21286ba0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:09:46.663542  478863 system_pods.go:61] "kube-controller-manager-newest-cni-718789" [804c53b6-55ab-459c-ab0b-4e8ec1dc8147] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:09:46.663552  478863 system_pods.go:61] "kube-proxy-s7gjc" [a08b3286-dc61-4ffc-8654-7be35ce377c6] Running
	I1017 20:09:46.663567  478863 system_pods.go:61] "kube-scheduler-newest-cni-718789" [1103386a-5132-4c74-a47c-f31ad50a8447] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:09:46.663578  478863 system_pods.go:61] "storage-provisioner" [0da306ef-227b-4f5c-a44c-c7cab4716c98] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 20:09:46.663584  478863 system_pods.go:74] duration metric: took 3.459229ms to wait for pod list to return data ...
	I1017 20:09:46.663596  478863 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:09:46.665731  478863 default_sa.go:45] found service account: "default"
	I1017 20:09:46.665781  478863 default_sa.go:55] duration metric: took 2.176277ms for default service account to be created ...
	I1017 20:09:46.665807  478863 kubeadm.go:586] duration metric: took 7.086160466s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 20:09:46.665837  478863 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:09:46.668277  478863 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:09:46.668311  478863 node_conditions.go:123] node cpu capacity is 2
	I1017 20:09:46.668324  478863 node_conditions.go:105] duration metric: took 2.448689ms to run NodePressure ...
	I1017 20:09:46.668335  478863 start.go:241] waiting for startup goroutines ...
	I1017 20:09:46.668347  478863 start.go:246] waiting for cluster config update ...
	I1017 20:09:46.668362  478863 start.go:255] writing updated cluster config ...
	I1017 20:09:46.668680  478863 ssh_runner.go:195] Run: rm -f paused
	I1017 20:09:46.728333  478863 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 20:09:46.733832  478863 out.go:179] * Done! kubectl is now configured to use "newest-cni-718789" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.458701673Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.469130953Z" level=info msg="Running pod sandbox: kube-system/kindnet-lxdzb/POD" id=85f960ef-cccf-4d8a-8597-3fa30db8e0ec name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.469210368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.476296264Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=85f960ef-cccf-4d8a-8597-3fa30db8e0ec name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.476570374Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=74d1a73b-e13e-4289-acf1-3596e55d4955 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.482594793Z" level=info msg="Ran pod sandbox 8c62d6d72d3796777ccc969d4fc43ce1ddd2758d4ca64a9905c7295470ba6943 with infra container: kube-system/kindnet-lxdzb/POD" id=85f960ef-cccf-4d8a-8597-3fa30db8e0ec name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.486336961Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=6834d49c-dd32-4a31-8774-f96b002954c5 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.487305197Z" level=info msg="Ran pod sandbox 591f90dbd88abd87786a7ee65345f04129ed0ee785fad0e5621e4d0e3ebbb8fc with infra container: kube-system/kube-proxy-s7gjc/POD" id=74d1a73b-e13e-4289-acf1-3596e55d4955 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.487616639Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=42549a1e-8a4a-4de1-a182-3c5db8b69853 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.488930745Z" level=info msg="Creating container: kube-system/kindnet-lxdzb/kindnet-cni" id=99cbd71a-1696-4152-83ec-e342dcc32f7b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.489257817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.491763423Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6a8565dc-5018-4aac-b86c-886ddefd0b1e name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.498619762Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=4c7ccc7e-ea60-411a-b5cf-7f95b631e2df name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.500885744Z" level=info msg="Creating container: kube-system/kube-proxy-s7gjc/kube-proxy" id=82b8c0a6-9c7a-41ae-9e5a-c947ba4d92df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.501601753Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.508123693Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.513221746Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.515126104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.517214918Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.584695903Z" level=info msg="Created container a9b7e45667850ec74ca85981e8e7b537ee6dbe83ad9a4c14aac4d3006c8f931d: kube-system/kube-proxy-s7gjc/kube-proxy" id=82b8c0a6-9c7a-41ae-9e5a-c947ba4d92df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.585319148Z" level=info msg="Starting container: a9b7e45667850ec74ca85981e8e7b537ee6dbe83ad9a4c14aac4d3006c8f931d" id=4ec85b77-f2b3-4d07-b51c-1bd01d8f4640 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.588103278Z" level=info msg="Started container" PID=1060 containerID=a9b7e45667850ec74ca85981e8e7b537ee6dbe83ad9a4c14aac4d3006c8f931d description=kube-system/kube-proxy-s7gjc/kube-proxy id=4ec85b77-f2b3-4d07-b51c-1bd01d8f4640 name=/runtime.v1.RuntimeService/StartContainer sandboxID=591f90dbd88abd87786a7ee65345f04129ed0ee785fad0e5621e4d0e3ebbb8fc
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.596824994Z" level=info msg="Created container 6431a4ca36b8e096a4d06ad1d26b38875cf3ae65fc1ff050170be7170b38bcdd: kube-system/kindnet-lxdzb/kindnet-cni" id=99cbd71a-1696-4152-83ec-e342dcc32f7b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.597642367Z" level=info msg="Starting container: 6431a4ca36b8e096a4d06ad1d26b38875cf3ae65fc1ff050170be7170b38bcdd" id=4b0086f7-e29f-4f06-96d3-24b0ae56246c name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.607059605Z" level=info msg="Started container" PID=1061 containerID=6431a4ca36b8e096a4d06ad1d26b38875cf3ae65fc1ff050170be7170b38bcdd description=kube-system/kindnet-lxdzb/kindnet-cni id=4b0086f7-e29f-4f06-96d3-24b0ae56246c name=/runtime.v1.RuntimeService/StartContainer sandboxID=8c62d6d72d3796777ccc969d4fc43ce1ddd2758d4ca64a9905c7295470ba6943
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	6431a4ca36b8e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   8c62d6d72d379       kindnet-lxdzb                               kube-system
	a9b7e45667850       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   591f90dbd88ab       kube-proxy-s7gjc                            kube-system
	fc8b1b886a881       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   11 seconds ago      Running             kube-scheduler            1                   db4eacd179b5b       kube-scheduler-newest-cni-718789            kube-system
	bf10220fe426e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   11 seconds ago      Running             kube-apiserver            1                   a76ae5d9c356a       kube-apiserver-newest-cni-718789            kube-system
	6ae81ee5a9647       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   11 seconds ago      Running             kube-controller-manager   1                   bbb39a051ac1b       kube-controller-manager-newest-cni-718789   kube-system
	44d8e518daaf7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   11 seconds ago      Running             etcd                      1                   02fadb3ddfd40       etcd-newest-cni-718789                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-718789
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-718789
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=newest-cni-718789
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_09_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:09:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-718789
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:09:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:09:45 +0000   Fri, 17 Oct 2025 20:09:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:09:45 +0000   Fri, 17 Oct 2025 20:09:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:09:45 +0000   Fri, 17 Oct 2025 20:09:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 17 Oct 2025 20:09:45 +0000   Fri, 17 Oct 2025 20:09:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-718789
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                6401c5a6-7a14-4968-8d2b-14b1d23b2a13
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-718789                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         31s
	  kube-system                 kindnet-lxdzb                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-718789             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-718789    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-s7gjc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-718789             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 24s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  39s (x8 over 39s)  kubelet          Node newest-cni-718789 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 39s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 39s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    39s (x8 over 39s)  kubelet          Node newest-cni-718789 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     39s (x8 over 39s)  kubelet          Node newest-cni-718789 status is now: NodeHasSufficientPID
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     31s                kubelet          Node newest-cni-718789 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    31s                kubelet          Node newest-cni-718789 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  31s                kubelet          Node newest-cni-718789 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           27s                node-controller  Node newest-cni-718789 event: Registered Node newest-cni-718789 in Controller
	  Normal   Starting                 13s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12s (x8 over 13s)  kubelet          Node newest-cni-718789 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12s (x8 over 13s)  kubelet          Node newest-cni-718789 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12s (x8 over 13s)  kubelet          Node newest-cni-718789 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-718789 event: Registered Node newest-cni-718789 in Controller
	
	
	==> dmesg <==
	[ +18.070710] overlayfs: idmapped layers are currently not supported
	[Oct17 19:47] overlayfs: idmapped layers are currently not supported
	[ +43.697346] overlayfs: idmapped layers are currently not supported
	[Oct17 19:48] overlayfs: idmapped layers are currently not supported
	[Oct17 19:49] overlayfs: idmapped layers are currently not supported
	[ +26.194162] overlayfs: idmapped layers are currently not supported
	[Oct17 19:50] overlayfs: idmapped layers are currently not supported
	[Oct17 19:52] overlayfs: idmapped layers are currently not supported
	[Oct17 19:54] overlayfs: idmapped layers are currently not supported
	[Oct17 19:55] overlayfs: idmapped layers are currently not supported
	[Oct17 19:56] overlayfs: idmapped layers are currently not supported
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:01] overlayfs: idmapped layers are currently not supported
	[ +29.873287] overlayfs: idmapped layers are currently not supported
	[Oct17 20:02] overlayfs: idmapped layers are currently not supported
	[ +29.827785] overlayfs: idmapped layers are currently not supported
	[Oct17 20:03] overlayfs: idmapped layers are currently not supported
	[Oct17 20:04] overlayfs: idmapped layers are currently not supported
	[Oct17 20:05] overlayfs: idmapped layers are currently not supported
	[Oct17 20:06] overlayfs: idmapped layers are currently not supported
	[Oct17 20:07] overlayfs: idmapped layers are currently not supported
	[ +30.002292] overlayfs: idmapped layers are currently not supported
	[Oct17 20:08] overlayfs: idmapped layers are currently not supported
	[Oct17 20:09] overlayfs: idmapped layers are currently not supported
	[ +26.726183] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [44d8e518daaf7003deeb5318c8487caac4dc7e2dd9f5304c7652f42453d88c10] <==
	{"level":"warn","ts":"2025-10-17T20:09:43.571711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.591080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.606048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.619985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.636086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.651130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.665773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.681688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.698433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.716715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.732387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.749550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.764414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.778938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.794753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.810847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.831203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.852188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.860554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.881307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.891394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.918995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.937300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.961745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:44.022429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49768","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:09:51 up  2:52,  0 user,  load average: 3.66, 4.41, 3.39
	Linux newest-cni-718789 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6431a4ca36b8e096a4d06ad1d26b38875cf3ae65fc1ff050170be7170b38bcdd] <==
	I1017 20:09:45.719696       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:09:45.725657       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 20:09:45.725779       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:09:45.725791       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:09:45.725805       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:09:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:09:45.940433       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:09:45.940452       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:09:45.940461       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:09:45.940827       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [bf10220fe426e3e6e10f9b3b26eb7432ae81bc39b8d091cee13805fbf7585fb3] <==
	I1017 20:09:45.043643       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 20:09:45.043818       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 20:09:45.043864       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:09:45.076819       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 20:09:45.077099       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 20:09:45.077154       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:09:45.104902       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1017 20:09:45.105113       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 20:09:45.106141       1 aggregator.go:171] initial CRD sync complete...
	I1017 20:09:45.106189       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 20:09:45.106198       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:09:45.106208       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:09:45.118387       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:09:45.125364       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1017 20:09:45.160496       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:09:45.582324       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:09:45.842241       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:09:45.931445       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:09:45.984812       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:09:46.012131       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:09:46.119303       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.13.213"}
	I1017 20:09:46.138078       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.13.101"}
	I1017 20:09:48.464881       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:09:48.567003       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:09:48.714349       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6ae81ee5a964746ee11924e4851ada6bbdad70b4d25601b3cb321aa3c2eafb58] <==
	I1017 20:09:48.134615       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 20:09:48.135173       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 20:09:48.135338       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 20:09:48.135689       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-718789"
	I1017 20:09:48.135775       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1017 20:09:48.136684       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 20:09:48.141183       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 20:09:48.143607       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:09:48.147845       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 20:09:48.150168       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 20:09:48.152449       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 20:09:48.153123       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:09:48.156605       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 20:09:48.156771       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 20:09:48.157868       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 20:09:48.158655       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:09:48.159122       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 20:09:48.163873       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 20:09:48.166879       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1017 20:09:48.167967       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 20:09:48.170190       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 20:09:48.231932       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:09:48.259552       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:09:48.259576       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:09:48.259584       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [a9b7e45667850ec74ca85981e8e7b537ee6dbe83ad9a4c14aac4d3006c8f931d] <==
	I1017 20:09:45.781187       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:09:46.106465       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:09:46.232816       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:09:46.232850       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 20:09:46.232933       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:09:46.251483       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:09:46.251604       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:09:46.255571       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:09:46.255931       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:09:46.255954       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:09:46.260640       1 config.go:200] "Starting service config controller"
	I1017 20:09:46.260658       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:09:46.260681       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:09:46.260687       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:09:46.260699       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:09:46.260703       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:09:46.261412       1 config.go:309] "Starting node config controller"
	I1017 20:09:46.261431       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:09:46.261438       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:09:46.360928       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:09:46.360968       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 20:09:46.360933       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [fc8b1b886a8818d8867cb1f27b254636bf690f6338d52d794d2a5fe24e6afb17] <==
	I1017 20:09:41.665009       1 serving.go:386] Generated self-signed cert in-memory
	W1017 20:09:44.677043       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 20:09:44.677148       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 20:09:44.677182       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 20:09:44.677221       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 20:09:44.962908       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:09:44.962940       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:09:44.974424       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:09:44.974594       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:09:44.974622       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:09:44.974640       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 20:09:45.136729       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:09:44 newest-cni-718789 kubelet[731]: E1017 20:09:44.588030     731 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-718789\" not found" node="newest-cni-718789"
	Oct 17 20:09:44 newest-cni-718789 kubelet[731]: I1017 20:09:44.720268     731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-718789"
	Oct 17 20:09:44 newest-cni-718789 kubelet[731]: I1017 20:09:44.805682     731 apiserver.go:52] "Watching apiserver"
	Oct 17 20:09:44 newest-cni-718789 kubelet[731]: I1017 20:09:44.944958     731 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.004791     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f8a65f1-734c-4cc7-be69-7554cd4a7f07-lib-modules\") pod \"kindnet-lxdzb\" (UID: \"5f8a65f1-734c-4cc7-be69-7554cd4a7f07\") " pod="kube-system/kindnet-lxdzb"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.004845     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a08b3286-dc61-4ffc-8654-7be35ce377c6-xtables-lock\") pod \"kube-proxy-s7gjc\" (UID: \"a08b3286-dc61-4ffc-8654-7be35ce377c6\") " pod="kube-system/kube-proxy-s7gjc"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.004874     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5f8a65f1-734c-4cc7-be69-7554cd4a7f07-cni-cfg\") pod \"kindnet-lxdzb\" (UID: \"5f8a65f1-734c-4cc7-be69-7554cd4a7f07\") " pod="kube-system/kindnet-lxdzb"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.004895     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f8a65f1-734c-4cc7-be69-7554cd4a7f07-xtables-lock\") pod \"kindnet-lxdzb\" (UID: \"5f8a65f1-734c-4cc7-be69-7554cd4a7f07\") " pod="kube-system/kindnet-lxdzb"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.004945     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a08b3286-dc61-4ffc-8654-7be35ce377c6-lib-modules\") pod \"kube-proxy-s7gjc\" (UID: \"a08b3286-dc61-4ffc-8654-7be35ce377c6\") " pod="kube-system/kube-proxy-s7gjc"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.157323     731 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.186302     731 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-718789"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.186434     731 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-718789"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.186470     731 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.187462     731 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: E1017 20:09:45.192507     731 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-718789\" already exists" pod="kube-system/kube-controller-manager-newest-cni-718789"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.192586     731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-718789"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: E1017 20:09:45.243944     731 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-718789\" already exists" pod="kube-system/kube-scheduler-newest-cni-718789"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.244000     731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-718789"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: E1017 20:09:45.267465     731 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-718789\" already exists" pod="kube-system/etcd-newest-cni-718789"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.267532     731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-718789"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: E1017 20:09:45.289855     731 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-718789\" already exists" pod="kube-system/kube-apiserver-newest-cni-718789"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: W1017 20:09:45.484025     731 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498/crio-591f90dbd88abd87786a7ee65345f04129ed0ee785fad0e5621e4d0e3ebbb8fc WatchSource:0}: Error finding container 591f90dbd88abd87786a7ee65345f04129ed0ee785fad0e5621e4d0e3ebbb8fc: Status 404 returned error can't find the container with id 591f90dbd88abd87786a7ee65345f04129ed0ee785fad0e5621e4d0e3ebbb8fc
	Oct 17 20:09:47 newest-cni-718789 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:09:47 newest-cni-718789 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:09:47 newest-cni-718789 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-718789 -n newest-cni-718789
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-718789 -n newest-cni-718789: exit status 2 (510.270921ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-718789 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-6pm4f storage-provisioner dashboard-metrics-scraper-6ffb444bf9-q8bq2 kubernetes-dashboard-855c9754f9-xd4wx
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-718789 describe pod coredns-66bc5c9577-6pm4f storage-provisioner dashboard-metrics-scraper-6ffb444bf9-q8bq2 kubernetes-dashboard-855c9754f9-xd4wx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-718789 describe pod coredns-66bc5c9577-6pm4f storage-provisioner dashboard-metrics-scraper-6ffb444bf9-q8bq2 kubernetes-dashboard-855c9754f9-xd4wx: exit status 1 (81.907408ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-6pm4f" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-q8bq2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-xd4wx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-718789 describe pod coredns-66bc5c9577-6pm4f storage-provisioner dashboard-metrics-scraper-6ffb444bf9-q8bq2 kubernetes-dashboard-855c9754f9-xd4wx: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-718789
helpers_test.go:243: (dbg) docker inspect newest-cni-718789:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498",
	        "Created": "2025-10-17T20:08:54.624965091Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 478992,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:09:30.904413019Z",
	            "FinishedAt": "2025-10-17T20:09:29.951688775Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498/hostname",
	        "HostsPath": "/var/lib/docker/containers/637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498/hosts",
	        "LogPath": "/var/lib/docker/containers/637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498/637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498-json.log",
	        "Name": "/newest-cni-718789",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-718789:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-718789",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498",
	                "LowerDir": "/var/lib/docker/overlay2/10560d65db01a75a4f3eeb4cd08a7e8876413ee4947ae1830f45d6bc860947dc-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/10560d65db01a75a4f3eeb4cd08a7e8876413ee4947ae1830f45d6bc860947dc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/10560d65db01a75a4f3eeb4cd08a7e8876413ee4947ae1830f45d6bc860947dc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/10560d65db01a75a4f3eeb4cd08a7e8876413ee4947ae1830f45d6bc860947dc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-718789",
	                "Source": "/var/lib/docker/volumes/newest-cni-718789/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-718789",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-718789",
	                "name.minikube.sigs.k8s.io": "newest-cni-718789",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e67feeda0c8cf126a71f6323a90ae68a221dd6c145b81a9e25acc874a184997f",
	            "SandboxKey": "/var/run/docker/netns/e67feeda0c8c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-718789": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:ea:d4:9b:0c:c2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8cd2eedf95aa208e706bcc7b2b128ff9ad782ac6990bd5bc75c6c1730d2dbe6",
	                    "EndpointID": "cca590f0987729eab7b9d66dbb1971148367276652b9c6862ce07628499c0c06",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-718789",
	                        "637fa246d690"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-718789 -n newest-cni-718789
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-718789 -n newest-cni-718789: exit status 2 (346.165526ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-718789 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-718789 logs -n 25: (1.043369399s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-572724 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ stop    │ -p embed-certs-572724 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ enable dashboard -p embed-certs-572724 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:08 UTC │
	│ image   │ no-preload-413711 image list --format=json                                                                                                                                                                                                    │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ pause   │ -p no-preload-413711 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ delete  │ -p no-preload-413711                                                                                                                                                                                                                          │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ delete  │ -p no-preload-413711                                                                                                                                                                                                                          │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ delete  │ -p disable-driver-mounts-672422                                                                                                                                                                                                               │ disable-driver-mounts-672422 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p default-k8s-diff-port-740780 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:09 UTC │
	│ image   │ embed-certs-572724 image list --format=json                                                                                                                                                                                                   │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ pause   │ -p embed-certs-572724 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ delete  │ -p embed-certs-572724                                                                                                                                                                                                                         │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ delete  │ -p embed-certs-572724                                                                                                                                                                                                                         │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p newest-cni-718789 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ addons  │ enable metrics-server -p newest-cni-718789 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	│ stop    │ -p newest-cni-718789 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ addons  │ enable dashboard -p newest-cni-718789 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p newest-cni-718789 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-740780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-740780 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ image   │ newest-cni-718789 image list --format=json                                                                                                                                                                                                    │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ pause   │ -p newest-cni-718789 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-740780 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p default-k8s-diff-port-740780 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:09:50
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:09:50.745417  481830 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:09:50.745569  481830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:09:50.745593  481830 out.go:374] Setting ErrFile to fd 2...
	I1017 20:09:50.745604  481830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:09:50.745950  481830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:09:50.746427  481830 out.go:368] Setting JSON to false
	I1017 20:09:50.747445  481830 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10342,"bootTime":1760721449,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 20:09:50.747566  481830 start.go:141] virtualization:  
	I1017 20:09:50.750924  481830 out.go:179] * [default-k8s-diff-port-740780] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:09:50.754891  481830 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:09:50.754994  481830 notify.go:220] Checking for updates...
	I1017 20:09:50.761248  481830 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:09:50.764221  481830 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:09:50.767063  481830 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 20:09:50.769854  481830 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:09:50.772843  481830 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:09:50.776134  481830 config.go:182] Loaded profile config "default-k8s-diff-port-740780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:09:50.776699  481830 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:09:50.814030  481830 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:09:50.814144  481830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:09:50.886605  481830 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:09:50.877379935 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:09:50.886715  481830 docker.go:318] overlay module found
	I1017 20:09:50.890112  481830 out.go:179] * Using the docker driver based on existing profile
	I1017 20:09:50.893873  481830 start.go:305] selected driver: docker
	I1017 20:09:50.893891  481830 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-740780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-740780 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:09:50.893986  481830 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:09:50.894647  481830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:09:50.991020  481830 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:09:50.981425657 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:09:50.991361  481830 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:09:50.991394  481830 cni.go:84] Creating CNI manager for ""
	I1017 20:09:50.991450  481830 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:09:50.991495  481830 start.go:349] cluster config:
	{Name:default-k8s-diff-port-740780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-740780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:09:50.994580  481830 out.go:179] * Starting "default-k8s-diff-port-740780" primary control-plane node in "default-k8s-diff-port-740780" cluster
	I1017 20:09:50.997412  481830 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:09:51.000466  481830 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:09:51.004404  481830 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:09:51.004482  481830 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:09:51.004509  481830 cache.go:58] Caching tarball of preloaded images
	I1017 20:09:51.004610  481830 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:09:51.004888  481830 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:09:51.004911  481830 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:09:51.005031  481830 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/config.json ...
	I1017 20:09:51.039385  481830 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:09:51.039414  481830 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:09:51.039428  481830 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:09:51.039451  481830 start.go:360] acquireMachinesLock for default-k8s-diff-port-740780: {Name:mkb4281c63cf8ac1be83a7647fdf1335968a6b70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:09:51.039514  481830 start.go:364] duration metric: took 41.402µs to acquireMachinesLock for "default-k8s-diff-port-740780"
	I1017 20:09:51.039539  481830 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:09:51.039547  481830 fix.go:54] fixHost starting: 
	I1017 20:09:51.039822  481830 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:09:51.067389  481830 fix.go:112] recreateIfNeeded on default-k8s-diff-port-740780: state=Stopped err=<nil>
	W1017 20:09:51.067416  481830 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.458701673Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.469130953Z" level=info msg="Running pod sandbox: kube-system/kindnet-lxdzb/POD" id=85f960ef-cccf-4d8a-8597-3fa30db8e0ec name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.469210368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.476296264Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=85f960ef-cccf-4d8a-8597-3fa30db8e0ec name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.476570374Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=74d1a73b-e13e-4289-acf1-3596e55d4955 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.482594793Z" level=info msg="Ran pod sandbox 8c62d6d72d3796777ccc969d4fc43ce1ddd2758d4ca64a9905c7295470ba6943 with infra container: kube-system/kindnet-lxdzb/POD" id=85f960ef-cccf-4d8a-8597-3fa30db8e0ec name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.486336961Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=6834d49c-dd32-4a31-8774-f96b002954c5 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.487305197Z" level=info msg="Ran pod sandbox 591f90dbd88abd87786a7ee65345f04129ed0ee785fad0e5621e4d0e3ebbb8fc with infra container: kube-system/kube-proxy-s7gjc/POD" id=74d1a73b-e13e-4289-acf1-3596e55d4955 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.487616639Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=42549a1e-8a4a-4de1-a182-3c5db8b69853 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.488930745Z" level=info msg="Creating container: kube-system/kindnet-lxdzb/kindnet-cni" id=99cbd71a-1696-4152-83ec-e342dcc32f7b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.489257817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.491763423Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6a8565dc-5018-4aac-b86c-886ddefd0b1e name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.498619762Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=4c7ccc7e-ea60-411a-b5cf-7f95b631e2df name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.500885744Z" level=info msg="Creating container: kube-system/kube-proxy-s7gjc/kube-proxy" id=82b8c0a6-9c7a-41ae-9e5a-c947ba4d92df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.501601753Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.508123693Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.513221746Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.515126104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.517214918Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.584695903Z" level=info msg="Created container a9b7e45667850ec74ca85981e8e7b537ee6dbe83ad9a4c14aac4d3006c8f931d: kube-system/kube-proxy-s7gjc/kube-proxy" id=82b8c0a6-9c7a-41ae-9e5a-c947ba4d92df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.585319148Z" level=info msg="Starting container: a9b7e45667850ec74ca85981e8e7b537ee6dbe83ad9a4c14aac4d3006c8f931d" id=4ec85b77-f2b3-4d07-b51c-1bd01d8f4640 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.588103278Z" level=info msg="Started container" PID=1060 containerID=a9b7e45667850ec74ca85981e8e7b537ee6dbe83ad9a4c14aac4d3006c8f931d description=kube-system/kube-proxy-s7gjc/kube-proxy id=4ec85b77-f2b3-4d07-b51c-1bd01d8f4640 name=/runtime.v1.RuntimeService/StartContainer sandboxID=591f90dbd88abd87786a7ee65345f04129ed0ee785fad0e5621e4d0e3ebbb8fc
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.596824994Z" level=info msg="Created container 6431a4ca36b8e096a4d06ad1d26b38875cf3ae65fc1ff050170be7170b38bcdd: kube-system/kindnet-lxdzb/kindnet-cni" id=99cbd71a-1696-4152-83ec-e342dcc32f7b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.597642367Z" level=info msg="Starting container: 6431a4ca36b8e096a4d06ad1d26b38875cf3ae65fc1ff050170be7170b38bcdd" id=4b0086f7-e29f-4f06-96d3-24b0ae56246c name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:09:45 newest-cni-718789 crio[613]: time="2025-10-17T20:09:45.607059605Z" level=info msg="Started container" PID=1061 containerID=6431a4ca36b8e096a4d06ad1d26b38875cf3ae65fc1ff050170be7170b38bcdd description=kube-system/kindnet-lxdzb/kindnet-cni id=4b0086f7-e29f-4f06-96d3-24b0ae56246c name=/runtime.v1.RuntimeService/StartContainer sandboxID=8c62d6d72d3796777ccc969d4fc43ce1ddd2758d4ca64a9905c7295470ba6943
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	6431a4ca36b8e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   8c62d6d72d379       kindnet-lxdzb                               kube-system
	a9b7e45667850       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   591f90dbd88ab       kube-proxy-s7gjc                            kube-system
	fc8b1b886a881       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   13 seconds ago      Running             kube-scheduler            1                   db4eacd179b5b       kube-scheduler-newest-cni-718789            kube-system
	bf10220fe426e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   13 seconds ago      Running             kube-apiserver            1                   a76ae5d9c356a       kube-apiserver-newest-cni-718789            kube-system
	6ae81ee5a9647       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   13 seconds ago      Running             kube-controller-manager   1                   bbb39a051ac1b       kube-controller-manager-newest-cni-718789   kube-system
	44d8e518daaf7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   13 seconds ago      Running             etcd                      1                   02fadb3ddfd40       etcd-newest-cni-718789                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-718789
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-718789
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=newest-cni-718789
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_09_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:09:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-718789
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:09:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:09:45 +0000   Fri, 17 Oct 2025 20:09:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:09:45 +0000   Fri, 17 Oct 2025 20:09:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:09:45 +0000   Fri, 17 Oct 2025 20:09:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 17 Oct 2025 20:09:45 +0000   Fri, 17 Oct 2025 20:09:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-718789
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                6401c5a6-7a14-4968-8d2b-14b1d23b2a13
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-718789                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-lxdzb                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-718789             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-718789    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-s7gjc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-718789             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   Starting                 7s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  41s (x8 over 41s)  kubelet          Node newest-cni-718789 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 41s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 41s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    41s (x8 over 41s)  kubelet          Node newest-cni-718789 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     41s (x8 over 41s)  kubelet          Node newest-cni-718789 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     33s                kubelet          Node newest-cni-718789 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-718789 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node newest-cni-718789 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           29s                node-controller  Node newest-cni-718789 event: Registered Node newest-cni-718789 in Controller
	  Normal   Starting                 15s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14s (x8 over 15s)  kubelet          Node newest-cni-718789 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14s (x8 over 15s)  kubelet          Node newest-cni-718789 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14s (x8 over 15s)  kubelet          Node newest-cni-718789 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-718789 event: Registered Node newest-cni-718789 in Controller
	
	
	==> dmesg <==
	[ +18.070710] overlayfs: idmapped layers are currently not supported
	[Oct17 19:47] overlayfs: idmapped layers are currently not supported
	[ +43.697346] overlayfs: idmapped layers are currently not supported
	[Oct17 19:48] overlayfs: idmapped layers are currently not supported
	[Oct17 19:49] overlayfs: idmapped layers are currently not supported
	[ +26.194162] overlayfs: idmapped layers are currently not supported
	[Oct17 19:50] overlayfs: idmapped layers are currently not supported
	[Oct17 19:52] overlayfs: idmapped layers are currently not supported
	[Oct17 19:54] overlayfs: idmapped layers are currently not supported
	[Oct17 19:55] overlayfs: idmapped layers are currently not supported
	[Oct17 19:56] overlayfs: idmapped layers are currently not supported
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:01] overlayfs: idmapped layers are currently not supported
	[ +29.873287] overlayfs: idmapped layers are currently not supported
	[Oct17 20:02] overlayfs: idmapped layers are currently not supported
	[ +29.827785] overlayfs: idmapped layers are currently not supported
	[Oct17 20:03] overlayfs: idmapped layers are currently not supported
	[Oct17 20:04] overlayfs: idmapped layers are currently not supported
	[Oct17 20:05] overlayfs: idmapped layers are currently not supported
	[Oct17 20:06] overlayfs: idmapped layers are currently not supported
	[Oct17 20:07] overlayfs: idmapped layers are currently not supported
	[ +30.002292] overlayfs: idmapped layers are currently not supported
	[Oct17 20:08] overlayfs: idmapped layers are currently not supported
	[Oct17 20:09] overlayfs: idmapped layers are currently not supported
	[ +26.726183] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [44d8e518daaf7003deeb5318c8487caac4dc7e2dd9f5304c7652f42453d88c10] <==
	{"level":"warn","ts":"2025-10-17T20:09:43.571711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.591080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.606048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.619985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.636086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.651130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.665773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.681688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.698433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.716715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.732387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.749550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.764414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.778938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.794753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.810847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.831203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.852188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.860554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.881307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.891394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.918995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.937300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:43.961745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:09:44.022429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49768","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:09:53 up  2:52,  0 user,  load average: 3.37, 4.34, 3.37
	Linux newest-cni-718789 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6431a4ca36b8e096a4d06ad1d26b38875cf3ae65fc1ff050170be7170b38bcdd] <==
	I1017 20:09:45.719696       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:09:45.725657       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 20:09:45.725779       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:09:45.725791       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:09:45.725805       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:09:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:09:45.940433       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:09:45.940452       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:09:45.940461       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:09:45.940827       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [bf10220fe426e3e6e10f9b3b26eb7432ae81bc39b8d091cee13805fbf7585fb3] <==
	I1017 20:09:45.043643       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 20:09:45.043818       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 20:09:45.043864       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:09:45.076819       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 20:09:45.077099       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 20:09:45.077154       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:09:45.104902       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1017 20:09:45.105113       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 20:09:45.106141       1 aggregator.go:171] initial CRD sync complete...
	I1017 20:09:45.106189       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 20:09:45.106198       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:09:45.106208       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:09:45.118387       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:09:45.125364       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1017 20:09:45.160496       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:09:45.582324       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:09:45.842241       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:09:45.931445       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:09:45.984812       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:09:46.012131       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:09:46.119303       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.13.213"}
	I1017 20:09:46.138078       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.13.101"}
	I1017 20:09:48.464881       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:09:48.567003       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:09:48.714349       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6ae81ee5a964746ee11924e4851ada6bbdad70b4d25601b3cb321aa3c2eafb58] <==
	I1017 20:09:48.134615       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 20:09:48.135173       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 20:09:48.135338       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 20:09:48.135689       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-718789"
	I1017 20:09:48.135775       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1017 20:09:48.136684       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 20:09:48.141183       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 20:09:48.143607       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:09:48.147845       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 20:09:48.150168       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 20:09:48.152449       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 20:09:48.153123       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:09:48.156605       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 20:09:48.156771       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 20:09:48.157868       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 20:09:48.158655       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:09:48.159122       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 20:09:48.163873       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 20:09:48.166879       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1017 20:09:48.167967       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 20:09:48.170190       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 20:09:48.231932       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:09:48.259552       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:09:48.259576       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:09:48.259584       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [a9b7e45667850ec74ca85981e8e7b537ee6dbe83ad9a4c14aac4d3006c8f931d] <==
	I1017 20:09:45.781187       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:09:46.106465       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:09:46.232816       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:09:46.232850       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 20:09:46.232933       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:09:46.251483       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:09:46.251604       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:09:46.255571       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:09:46.255931       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:09:46.255954       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:09:46.260640       1 config.go:200] "Starting service config controller"
	I1017 20:09:46.260658       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:09:46.260681       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:09:46.260687       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:09:46.260699       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:09:46.260703       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:09:46.261412       1 config.go:309] "Starting node config controller"
	I1017 20:09:46.261431       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:09:46.261438       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:09:46.360928       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:09:46.360968       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 20:09:46.360933       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [fc8b1b886a8818d8867cb1f27b254636bf690f6338d52d794d2a5fe24e6afb17] <==
	I1017 20:09:41.665009       1 serving.go:386] Generated self-signed cert in-memory
	W1017 20:09:44.677043       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 20:09:44.677148       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 20:09:44.677182       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 20:09:44.677221       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 20:09:44.962908       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:09:44.962940       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:09:44.974424       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:09:44.974594       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:09:44.974622       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:09:44.974640       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 20:09:45.136729       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:09:44 newest-cni-718789 kubelet[731]: E1017 20:09:44.588030     731 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-718789\" not found" node="newest-cni-718789"
	Oct 17 20:09:44 newest-cni-718789 kubelet[731]: I1017 20:09:44.720268     731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-718789"
	Oct 17 20:09:44 newest-cni-718789 kubelet[731]: I1017 20:09:44.805682     731 apiserver.go:52] "Watching apiserver"
	Oct 17 20:09:44 newest-cni-718789 kubelet[731]: I1017 20:09:44.944958     731 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.004791     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f8a65f1-734c-4cc7-be69-7554cd4a7f07-lib-modules\") pod \"kindnet-lxdzb\" (UID: \"5f8a65f1-734c-4cc7-be69-7554cd4a7f07\") " pod="kube-system/kindnet-lxdzb"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.004845     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a08b3286-dc61-4ffc-8654-7be35ce377c6-xtables-lock\") pod \"kube-proxy-s7gjc\" (UID: \"a08b3286-dc61-4ffc-8654-7be35ce377c6\") " pod="kube-system/kube-proxy-s7gjc"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.004874     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5f8a65f1-734c-4cc7-be69-7554cd4a7f07-cni-cfg\") pod \"kindnet-lxdzb\" (UID: \"5f8a65f1-734c-4cc7-be69-7554cd4a7f07\") " pod="kube-system/kindnet-lxdzb"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.004895     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f8a65f1-734c-4cc7-be69-7554cd4a7f07-xtables-lock\") pod \"kindnet-lxdzb\" (UID: \"5f8a65f1-734c-4cc7-be69-7554cd4a7f07\") " pod="kube-system/kindnet-lxdzb"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.004945     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a08b3286-dc61-4ffc-8654-7be35ce377c6-lib-modules\") pod \"kube-proxy-s7gjc\" (UID: \"a08b3286-dc61-4ffc-8654-7be35ce377c6\") " pod="kube-system/kube-proxy-s7gjc"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.157323     731 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.186302     731 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-718789"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.186434     731 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-718789"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.186470     731 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.187462     731 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: E1017 20:09:45.192507     731 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-718789\" already exists" pod="kube-system/kube-controller-manager-newest-cni-718789"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.192586     731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-718789"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: E1017 20:09:45.243944     731 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-718789\" already exists" pod="kube-system/kube-scheduler-newest-cni-718789"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.244000     731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-718789"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: E1017 20:09:45.267465     731 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-718789\" already exists" pod="kube-system/etcd-newest-cni-718789"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: I1017 20:09:45.267532     731 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-718789"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: E1017 20:09:45.289855     731 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-718789\" already exists" pod="kube-system/kube-apiserver-newest-cni-718789"
	Oct 17 20:09:45 newest-cni-718789 kubelet[731]: W1017 20:09:45.484025     731 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/637fa246d6909dfc6c1a180f72aba23105787417e436e48bf48fc3d704d4b498/crio-591f90dbd88abd87786a7ee65345f04129ed0ee785fad0e5621e4d0e3ebbb8fc WatchSource:0}: Error finding container 591f90dbd88abd87786a7ee65345f04129ed0ee785fad0e5621e4d0e3ebbb8fc: Status 404 returned error can't find the container with id 591f90dbd88abd87786a7ee65345f04129ed0ee785fad0e5621e4d0e3ebbb8fc
	Oct 17 20:09:47 newest-cni-718789 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:09:47 newest-cni-718789 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:09:47 newest-cni-718789 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
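Note: the kubelet journal above ends with systemd deactivating kubelet.service, which matches the first step of `minikube pause` (kubelet is disabled before containers are frozen, as traced in the default-k8s-diff-port failure further below). A minimal follow-up sketch, assuming the newest-cni-718789 profile still exists and is reachable over `minikube ssh` (hypothetical check, not part of the recorded test run):

	$ out/minikube-linux-arm64 -p newest-cni-718789 ssh -- sudo systemctl is-active kubelet   # expected: inactive after the pause attempt
	$ out/minikube-linux-arm64 -p newest-cni-718789 ssh -- sudo crictl ps                     # containers left running if the pause aborted partway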
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-718789 -n newest-cni-718789
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-718789 -n newest-cni-718789: exit status 2 (350.611141ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-718789 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-6pm4f storage-provisioner dashboard-metrics-scraper-6ffb444bf9-q8bq2 kubernetes-dashboard-855c9754f9-xd4wx
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-718789 describe pod coredns-66bc5c9577-6pm4f storage-provisioner dashboard-metrics-scraper-6ffb444bf9-q8bq2 kubernetes-dashboard-855c9754f9-xd4wx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-718789 describe pod coredns-66bc5c9577-6pm4f storage-provisioner dashboard-metrics-scraper-6ffb444bf9-q8bq2 kubernetes-dashboard-855c9754f9-xd4wx: exit status 1 (82.793104ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-6pm4f" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-q8bq2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-xd4wx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-718789 describe pod coredns-66bc5c9577-6pm4f storage-provisioner dashboard-metrics-scraper-6ffb444bf9-q8bq2 kubernetes-dashboard-855c9754f9-xd4wx: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.87s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-740780 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-740780 --alsologtostderr -v=1: exit status 80 (2.089125003s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-740780 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:10:53.393891  487374 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:10:53.394065  487374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:10:53.394078  487374 out.go:374] Setting ErrFile to fd 2...
	I1017 20:10:53.394084  487374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:10:53.394372  487374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:10:53.394664  487374 out.go:368] Setting JSON to false
	I1017 20:10:53.394708  487374 mustload.go:65] Loading cluster: default-k8s-diff-port-740780
	I1017 20:10:53.395110  487374 config.go:182] Loaded profile config "default-k8s-diff-port-740780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:10:53.395638  487374 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:10:53.415219  487374 host.go:66] Checking if "default-k8s-diff-port-740780" exists ...
	I1017 20:10:53.415610  487374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:10:53.478453  487374 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-17 20:10:53.467861282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:10:53.479219  487374 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-740780 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 20:10:53.482532  487374 out.go:179] * Pausing node default-k8s-diff-port-740780 ... 
	I1017 20:10:53.485364  487374 host.go:66] Checking if "default-k8s-diff-port-740780" exists ...
	I1017 20:10:53.485681  487374 ssh_runner.go:195] Run: systemctl --version
	I1017 20:10:53.485740  487374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:10:53.504782  487374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:10:53.607648  487374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:10:53.625516  487374 pause.go:52] kubelet running: true
	I1017 20:10:53.625581  487374 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:10:53.883954  487374 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:10:53.884047  487374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:10:53.959031  487374 cri.go:89] found id: "a159b70cb0ab7f408b26017316bda6e688ef0df499dfafaeb05cb122b5fb6b17"
	I1017 20:10:53.959055  487374 cri.go:89] found id: "3380c611e12dbdeaa42525e0a861b568befa9b96d862018967217894b34edf5b"
	I1017 20:10:53.959060  487374 cri.go:89] found id: "355f42b2d9e5ab8e9cc0398be0c31946c5fd5ef67f1542040bd152dc86fc9eaa"
	I1017 20:10:53.959064  487374 cri.go:89] found id: "ea08533626e7592fc61ba304cf97cd8eb64de0494753bde37a8b9d87caeca53f"
	I1017 20:10:53.959068  487374 cri.go:89] found id: "cb4d42676d8a4c718ff3906f4fcce605b5ee16ab93b39e0e2482f60b722be015"
	I1017 20:10:53.959071  487374 cri.go:89] found id: "a6b3e974b27e414682e8adc3b208f72c9f313b4733f18ab5f560bd7e238be80a"
	I1017 20:10:53.959075  487374 cri.go:89] found id: "7c7665546cb77975e68deac4ff243aa42b49d8525c2fc62e721424af6d1e6123"
	I1017 20:10:53.959078  487374 cri.go:89] found id: "a6caff41823275ad2cd049c0053ce5ae7602d4c363bc83b1fe7629a564b7ac54"
	I1017 20:10:53.959081  487374 cri.go:89] found id: "9b30d2deb9ae5ab342e2a970b00848a001b112b0bfa707783b0702db3735167d"
	I1017 20:10:53.959089  487374 cri.go:89] found id: "6e976958932ed0a771f2d17bd5b5b8abf05e910444ce5500a110d35836ac6690"
	I1017 20:10:53.959092  487374 cri.go:89] found id: "e4770227203260bbb9d237b5374c9f19a250d85735e375ebb88ac0f7f39647f1"
	I1017 20:10:53.959096  487374 cri.go:89] found id: ""
	I1017 20:10:53.959148  487374 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:10:53.978253  487374 retry.go:31] will retry after 259.319223ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:10:53Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:10:54.238698  487374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:10:54.251555  487374 pause.go:52] kubelet running: false
	I1017 20:10:54.251678  487374 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:10:54.444442  487374 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:10:54.444600  487374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:10:54.510244  487374 cri.go:89] found id: "a159b70cb0ab7f408b26017316bda6e688ef0df499dfafaeb05cb122b5fb6b17"
	I1017 20:10:54.510267  487374 cri.go:89] found id: "3380c611e12dbdeaa42525e0a861b568befa9b96d862018967217894b34edf5b"
	I1017 20:10:54.510274  487374 cri.go:89] found id: "355f42b2d9e5ab8e9cc0398be0c31946c5fd5ef67f1542040bd152dc86fc9eaa"
	I1017 20:10:54.510278  487374 cri.go:89] found id: "ea08533626e7592fc61ba304cf97cd8eb64de0494753bde37a8b9d87caeca53f"
	I1017 20:10:54.510281  487374 cri.go:89] found id: "cb4d42676d8a4c718ff3906f4fcce605b5ee16ab93b39e0e2482f60b722be015"
	I1017 20:10:54.510287  487374 cri.go:89] found id: "a6b3e974b27e414682e8adc3b208f72c9f313b4733f18ab5f560bd7e238be80a"
	I1017 20:10:54.510291  487374 cri.go:89] found id: "7c7665546cb77975e68deac4ff243aa42b49d8525c2fc62e721424af6d1e6123"
	I1017 20:10:54.510294  487374 cri.go:89] found id: "a6caff41823275ad2cd049c0053ce5ae7602d4c363bc83b1fe7629a564b7ac54"
	I1017 20:10:54.510318  487374 cri.go:89] found id: "9b30d2deb9ae5ab342e2a970b00848a001b112b0bfa707783b0702db3735167d"
	I1017 20:10:54.510326  487374 cri.go:89] found id: "6e976958932ed0a771f2d17bd5b5b8abf05e910444ce5500a110d35836ac6690"
	I1017 20:10:54.510333  487374 cri.go:89] found id: "e4770227203260bbb9d237b5374c9f19a250d85735e375ebb88ac0f7f39647f1"
	I1017 20:10:54.510336  487374 cri.go:89] found id: ""
	I1017 20:10:54.510394  487374 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:10:54.521922  487374 retry.go:31] will retry after 513.78459ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:10:54Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:10:55.037224  487374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:10:55.061977  487374 pause.go:52] kubelet running: false
	I1017 20:10:55.062113  487374 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:10:55.318214  487374 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:10:55.318315  487374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:10:55.392513  487374 cri.go:89] found id: "a159b70cb0ab7f408b26017316bda6e688ef0df499dfafaeb05cb122b5fb6b17"
	I1017 20:10:55.392568  487374 cri.go:89] found id: "3380c611e12dbdeaa42525e0a861b568befa9b96d862018967217894b34edf5b"
	I1017 20:10:55.392573  487374 cri.go:89] found id: "355f42b2d9e5ab8e9cc0398be0c31946c5fd5ef67f1542040bd152dc86fc9eaa"
	I1017 20:10:55.392578  487374 cri.go:89] found id: "ea08533626e7592fc61ba304cf97cd8eb64de0494753bde37a8b9d87caeca53f"
	I1017 20:10:55.392581  487374 cri.go:89] found id: "cb4d42676d8a4c718ff3906f4fcce605b5ee16ab93b39e0e2482f60b722be015"
	I1017 20:10:55.392585  487374 cri.go:89] found id: "a6b3e974b27e414682e8adc3b208f72c9f313b4733f18ab5f560bd7e238be80a"
	I1017 20:10:55.392589  487374 cri.go:89] found id: "7c7665546cb77975e68deac4ff243aa42b49d8525c2fc62e721424af6d1e6123"
	I1017 20:10:55.392592  487374 cri.go:89] found id: "a6caff41823275ad2cd049c0053ce5ae7602d4c363bc83b1fe7629a564b7ac54"
	I1017 20:10:55.392595  487374 cri.go:89] found id: "9b30d2deb9ae5ab342e2a970b00848a001b112b0bfa707783b0702db3735167d"
	I1017 20:10:55.392601  487374 cri.go:89] found id: "6e976958932ed0a771f2d17bd5b5b8abf05e910444ce5500a110d35836ac6690"
	I1017 20:10:55.392604  487374 cri.go:89] found id: "e4770227203260bbb9d237b5374c9f19a250d85735e375ebb88ac0f7f39647f1"
	I1017 20:10:55.392620  487374 cri.go:89] found id: ""
	I1017 20:10:55.392677  487374 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:10:55.407193  487374 out.go:203] 
	W1017 20:10:55.410123  487374 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:10:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:10:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:10:55.410148  487374 out.go:285] * 
	* 
	W1017 20:10:55.416712  487374 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:10:55.419594  487374 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-740780 --alsologtostderr -v=1 failed: exit status 80
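Note: the stderr above shows the root cause of the exit status 80: every retry of `sudo runc list -f json` exits with status 1 because /run/runc does not exist on the node, so the pause path never gets a container list to freeze. A minimal reproduction sketch, assuming the default-k8s-diff-port-740780 profile is still running and reachable over `minikube ssh` (profile name, runc and crictl invocations are taken from the stderr above; the ls check is an added, hypothetical step):

	$ out/minikube-linux-arm64 -p default-k8s-diff-port-740780 ssh -- sudo ls /run/runc           # does the runc state directory exist?
	$ out/minikube-linux-arm64 -p default-k8s-diff-port-740780 ssh -- sudo runc list -f json      # the exact call that pause retries
	$ out/minikube-linux-arm64 -p default-k8s-diff-port-740780 ssh -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"

If the first two commands fail with "no such file or directory" while crictl still lists kube-system containers, the node is in the same state that made `minikube pause` give up here.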
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-740780
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-740780:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395",
	        "Created": "2025-10-17T20:08:03.310435059Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 482015,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:09:51.104940758Z",
	            "FinishedAt": "2025-10-17T20:09:50.10554158Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395/hostname",
	        "HostsPath": "/var/lib/docker/containers/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395/hosts",
	        "LogPath": "/var/lib/docker/containers/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395-json.log",
	        "Name": "/default-k8s-diff-port-740780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-740780:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-740780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395",
	                "LowerDir": "/var/lib/docker/overlay2/280fba353d4fefed83ab3bd7b3798c5b596f4b4c372a4f322e0f6bae68b71860-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/280fba353d4fefed83ab3bd7b3798c5b596f4b4c372a4f322e0f6bae68b71860/merged",
	                "UpperDir": "/var/lib/docker/overlay2/280fba353d4fefed83ab3bd7b3798c5b596f4b4c372a4f322e0f6bae68b71860/diff",
	                "WorkDir": "/var/lib/docker/overlay2/280fba353d4fefed83ab3bd7b3798c5b596f4b4c372a4f322e0f6bae68b71860/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-740780",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-740780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-740780",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-740780",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-740780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c474177c3c955da8b4faf22f8c8b3b764d3744ea3ebbff477c861659d934c10c",
	            "SandboxKey": "/var/run/docker/netns/c474177c3c95",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-740780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:2e:83:93:38:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b07c93b74eadee92a26c052eb44e638916a69f6583542a7473d7302a377567bf",
	                    "EndpointID": "7aeaf2acfcbf765d3e66830fa317364530db7f447a35c87d2ed1f65ee01cd2bf",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-740780",
	                        "fedc9c1ddaae"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
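Note: this inspect output is what minikube's pause path parses to find the node's published SSH port (see the cli_runner call with the Go template in the stderr earlier). A minimal sketch of the same lookup, assuming the container captured above is still present:

	$ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-740780

which, for the state shown in the JSON above, should print 33455.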
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-740780 -n default-k8s-diff-port-740780
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-740780 -n default-k8s-diff-port-740780: exit status 2 (375.789098ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
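Note: the templated call above reports only the Host field, so the exit status 2 may simply reflect other components being down after the aborted pause. A minimal sketch, assuming the profile is still present, for viewing all component states at once (hypothetical follow-up, not part of the recorded test run):

	$ out/minikube-linux-arm64 status -p default-k8s-diff-port-740780

which prints the Host, Kubelet, APIServer and Kubeconfig states for the profile.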
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-740780 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-740780 logs -n 25: (1.402077944s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p no-preload-413711 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ delete  │ -p no-preload-413711                                                                                                                                                                                                                          │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ delete  │ -p no-preload-413711                                                                                                                                                                                                                          │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ delete  │ -p disable-driver-mounts-672422                                                                                                                                                                                                               │ disable-driver-mounts-672422 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p default-k8s-diff-port-740780 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:09 UTC │
	│ image   │ embed-certs-572724 image list --format=json                                                                                                                                                                                                   │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ pause   │ -p embed-certs-572724 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ delete  │ -p embed-certs-572724                                                                                                                                                                                                                         │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ delete  │ -p embed-certs-572724                                                                                                                                                                                                                         │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p newest-cni-718789 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ addons  │ enable metrics-server -p newest-cni-718789 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	│ stop    │ -p newest-cni-718789 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ addons  │ enable dashboard -p newest-cni-718789 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p newest-cni-718789 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-740780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-740780 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ image   │ newest-cni-718789 image list --format=json                                                                                                                                                                                                    │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ pause   │ -p newest-cni-718789 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-740780 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p default-k8s-diff-port-740780 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:10 UTC │
	│ delete  │ -p newest-cni-718789                                                                                                                                                                                                                          │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p newest-cni-718789                                                                                                                                                                                                                          │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p auto-804622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-804622                  │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	│ image   │ default-k8s-diff-port-740780 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:10 UTC │
	│ pause   │ -p default-k8s-diff-port-740780 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:09:56
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:09:56.838710  483598 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:09:56.839315  483598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:09:56.839336  483598 out.go:374] Setting ErrFile to fd 2...
	I1017 20:09:56.839359  483598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:09:56.839640  483598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:09:56.840068  483598 out.go:368] Setting JSON to false
	I1017 20:09:56.841062  483598 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10348,"bootTime":1760721449,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 20:09:56.841154  483598 start.go:141] virtualization:  
	I1017 20:09:56.845132  483598 out.go:179] * [auto-804622] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:09:56.849480  483598 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:09:56.849552  483598 notify.go:220] Checking for updates...
	I1017 20:09:56.855631  483598 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:09:56.858780  483598 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:09:56.862217  483598 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 20:09:56.865259  483598 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:09:56.868296  483598 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:09:56.871810  483598 config.go:182] Loaded profile config "default-k8s-diff-port-740780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:09:56.871984  483598 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:09:56.911004  483598 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:09:56.911126  483598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:09:56.997602  483598 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-17 20:09:56.981181733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:09:56.997701  483598 docker.go:318] overlay module found
	I1017 20:09:57.000918  483598 out.go:179] * Using the docker driver based on user configuration
	I1017 20:09:57.003825  483598 start.go:305] selected driver: docker
	I1017 20:09:57.003851  483598 start.go:925] validating driver "docker" against <nil>
	I1017 20:09:57.003884  483598 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:09:57.004709  483598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:09:57.091679  483598 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-17 20:09:57.081451048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:09:57.091914  483598 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 20:09:57.092157  483598 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:09:57.094982  483598 out.go:179] * Using Docker driver with root privileges
	I1017 20:09:57.097713  483598 cni.go:84] Creating CNI manager for ""
	I1017 20:09:57.097776  483598 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:09:57.097785  483598 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 20:09:57.097858  483598 start.go:349] cluster config:
	{Name:auto-804622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-804622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1017 20:09:57.101433  483598 out.go:179] * Starting "auto-804622" primary control-plane node in "auto-804622" cluster
	I1017 20:09:57.106413  483598 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:09:57.109445  483598 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:09:57.112278  483598 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:09:57.112328  483598 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:09:57.112337  483598 cache.go:58] Caching tarball of preloaded images
	I1017 20:09:57.112432  483598 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:09:57.112441  483598 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:09:57.112570  483598 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/config.json ...
	I1017 20:09:57.112598  483598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/config.json: {Name:mkc2890a001174a0f307b41e739f2161f812a8b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:09:57.112754  483598 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:09:57.148422  483598 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:09:57.148443  483598 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:09:57.148457  483598 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:09:57.148481  483598 start.go:360] acquireMachinesLock for auto-804622: {Name:mk1c90dcfd99f1024836dbf0db6cd464090d1b6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:09:57.148682  483598 start.go:364] duration metric: took 182.83µs to acquireMachinesLock for "auto-804622"
	I1017 20:09:57.148717  483598 start.go:93] Provisioning new machine with config: &{Name:auto-804622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-804622 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:09:57.148814  483598 start.go:125] createHost starting for "" (driver="docker")
	I1017 20:09:55.755764  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:09:55.773146  481830 provision.go:87] duration metric: took 747.84085ms to configureAuth
	I1017 20:09:55.773172  481830 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:09:55.773362  481830 config.go:182] Loaded profile config "default-k8s-diff-port-740780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:09:55.773479  481830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:09:55.790344  481830 main.go:141] libmachine: Using SSH client type: native
	I1017 20:09:55.790702  481830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1017 20:09:55.790727  481830 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:09:56.152206  481830 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:09:56.152228  481830 machine.go:96] duration metric: took 4.707206512s to provisionDockerMachine
	I1017 20:09:56.152238  481830 start.go:293] postStartSetup for "default-k8s-diff-port-740780" (driver="docker")
	I1017 20:09:56.152249  481830 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:09:56.152325  481830 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:09:56.152368  481830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:09:56.182425  481830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:09:56.292764  481830 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:09:56.296199  481830 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:09:56.296225  481830 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:09:56.296237  481830 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 20:09:56.296290  481830 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 20:09:56.296368  481830 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 20:09:56.296473  481830 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:09:56.307206  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:09:56.364406  481830 start.go:296] duration metric: took 212.152543ms for postStartSetup
	I1017 20:09:56.364484  481830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:09:56.364577  481830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:09:56.383676  481830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:09:56.496993  481830 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:09:56.502562  481830 fix.go:56] duration metric: took 5.463006588s for fixHost
	I1017 20:09:56.502590  481830 start.go:83] releasing machines lock for "default-k8s-diff-port-740780", held for 5.463057779s
	I1017 20:09:56.502696  481830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-740780
	I1017 20:09:56.523108  481830 ssh_runner.go:195] Run: cat /version.json
	I1017 20:09:56.523170  481830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:09:56.523453  481830 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:09:56.523513  481830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:09:56.545531  481830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:09:56.568612  481830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:09:56.668115  481830 ssh_runner.go:195] Run: systemctl --version
	I1017 20:09:56.770094  481830 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:09:56.825505  481830 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:09:56.830464  481830 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:09:56.830528  481830 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:09:56.839184  481830 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:09:56.839209  481830 start.go:495] detecting cgroup driver to use...
	I1017 20:09:56.839239  481830 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:09:56.839285  481830 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:09:56.857024  481830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:09:56.871260  481830 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:09:56.871320  481830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:09:56.889444  481830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:09:56.907405  481830 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:09:57.071160  481830 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:09:57.213075  481830 docker.go:234] disabling docker service ...
	I1017 20:09:57.213146  481830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:09:57.232346  481830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:09:57.250253  481830 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:09:57.404660  481830 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:09:57.539223  481830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:09:57.554800  481830 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:09:57.575049  481830 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:09:57.575217  481830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:57.587213  481830 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:09:57.587299  481830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:57.596661  481830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:57.605878  481830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:57.621062  481830 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:09:57.633130  481830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:57.643049  481830 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:57.658478  481830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:57.668074  481830 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:09:57.681112  481830 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:09:57.689788  481830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:09:57.838194  481830 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:09:58.003167  481830 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:09:58.003273  481830 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:09:58.020814  481830 start.go:563] Will wait 60s for crictl version
	I1017 20:09:58.020878  481830 ssh_runner.go:195] Run: which crictl
	I1017 20:09:58.025670  481830 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:09:58.077946  481830 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:09:58.078046  481830 ssh_runner.go:195] Run: crio --version
	I1017 20:09:58.130993  481830 ssh_runner.go:195] Run: crio --version
	I1017 20:09:58.175256  481830 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:09:58.178109  481830 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-740780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:09:58.219053  481830 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 20:09:58.224446  481830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:09:58.239451  481830 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-740780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-740780 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:09:58.239571  481830 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:09:58.239624  481830 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:09:58.279809  481830 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:09:58.279835  481830 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:09:58.279890  481830 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:09:58.324949  481830 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:09:58.324974  481830 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:09:58.324981  481830 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1017 20:09:58.325071  481830 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-740780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-740780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:09:58.325146  481830 ssh_runner.go:195] Run: crio config
	I1017 20:09:58.429652  481830 cni.go:84] Creating CNI manager for ""
	I1017 20:09:58.429676  481830 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:09:58.429700  481830 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:09:58.429722  481830 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-740780 NodeName:default-k8s-diff-port-740780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:09:58.429858  481830 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-740780"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:09:58.429930  481830 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:09:58.437859  481830 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:09:58.437935  481830 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:09:58.445440  481830 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1017 20:09:58.458452  481830 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:09:58.470996  481830 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1017 20:09:58.484149  481830 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:09:58.488053  481830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:09:58.497261  481830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:09:58.641285  481830 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:09:58.657024  481830 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780 for IP: 192.168.76.2
	I1017 20:09:58.657045  481830 certs.go:195] generating shared ca certs ...
	I1017 20:09:58.657061  481830 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:09:58.657199  481830 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 20:09:58.657248  481830 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 20:09:58.657259  481830 certs.go:257] generating profile certs ...
	I1017 20:09:58.657353  481830 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.key
	I1017 20:09:58.657420  481830 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.key.79d0c2c9
	I1017 20:09:58.657470  481830 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.key
	I1017 20:09:58.657574  481830 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 20:09:58.657612  481830 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 20:09:58.657628  481830 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:09:58.657657  481830 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:09:58.657682  481830 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:09:58.657712  481830 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 20:09:58.657755  481830 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:09:58.658321  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:09:58.721621  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:09:58.762422  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:09:58.805695  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 20:09:58.856503  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 20:09:58.910502  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:09:58.950292  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:09:58.978158  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:09:58.997125  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:09:59.017323  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 20:09:59.050013  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 20:09:59.075595  481830 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:09:59.088136  481830 ssh_runner.go:195] Run: openssl version
	I1017 20:09:59.094986  481830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 20:09:59.103318  481830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 20:09:59.107197  481830 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 20:09:59.107306  481830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 20:09:59.158844  481830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 20:09:59.168103  481830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 20:09:59.177338  481830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 20:09:59.182329  481830 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 20:09:59.182458  481830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 20:09:59.228784  481830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:09:59.239706  481830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:09:59.251840  481830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:09:59.256974  481830 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:09:59.257102  481830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:09:59.304863  481830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:09:59.314591  481830 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:09:59.319648  481830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:09:59.370217  481830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:09:59.488378  481830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:09:59.572408  481830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:09:59.679625  481830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:09:59.813954  481830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 20:09:59.942856  481830 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-740780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-740780 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:09:59.943015  481830 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:09:59.943107  481830 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:10:00.194336  481830 cri.go:89] found id: "a6b3e974b27e414682e8adc3b208f72c9f313b4733f18ab5f560bd7e238be80a"
	I1017 20:10:00.194425  481830 cri.go:89] found id: "7c7665546cb77975e68deac4ff243aa42b49d8525c2fc62e721424af6d1e6123"
	I1017 20:10:00.194478  481830 cri.go:89] found id: "a6caff41823275ad2cd049c0053ce5ae7602d4c363bc83b1fe7629a564b7ac54"
	I1017 20:10:00.194497  481830 cri.go:89] found id: "9b30d2deb9ae5ab342e2a970b00848a001b112b0bfa707783b0702db3735167d"
	I1017 20:10:00.194538  481830 cri.go:89] found id: ""
	I1017 20:10:00.194675  481830 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 20:10:00.312956  481830 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:10:00Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:10:00.313180  481830 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:10:00.397419  481830 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:10:00.397502  481830 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:10:00.397605  481830 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:10:00.454981  481830 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:10:00.455455  481830 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-740780" does not appear in /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:10:00.455614  481830 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-257739/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-740780" cluster setting kubeconfig missing "default-k8s-diff-port-740780" context setting]
	I1017 20:10:00.455972  481830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:00.457838  481830 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:10:00.484391  481830 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1017 20:10:00.484495  481830 kubeadm.go:601] duration metric: took 86.961245ms to restartPrimaryControlPlane
	I1017 20:10:00.484544  481830 kubeadm.go:402] duration metric: took 541.711264ms to StartCluster
	I1017 20:10:00.484586  481830 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:00.484708  481830 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:10:00.485457  481830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:00.486043  481830 config.go:182] Loaded profile config "default-k8s-diff-port-740780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:10:00.486162  481830 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:10:00.486256  481830 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-740780"
	I1017 20:10:00.486273  481830 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-740780"
	W1017 20:10:00.486287  481830 addons.go:247] addon storage-provisioner should already be in state true
	I1017 20:10:00.486320  481830 host.go:66] Checking if "default-k8s-diff-port-740780" exists ...
	I1017 20:10:00.487076  481830 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:10:00.487267  481830 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:10:00.487690  481830 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-740780"
	I1017 20:10:00.487711  481830 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-740780"
	W1017 20:10:00.487725  481830 addons.go:247] addon dashboard should already be in state true
	I1017 20:10:00.487761  481830 host.go:66] Checking if "default-k8s-diff-port-740780" exists ...
	I1017 20:10:00.488231  481830 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:10:00.488802  481830 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-740780"
	I1017 20:10:00.488834  481830 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-740780"
	I1017 20:10:00.489152  481830 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:10:00.500064  481830 out.go:179] * Verifying Kubernetes components...
	I1017 20:10:00.540941  481830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:10:00.562660  481830 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 20:10:00.562731  481830 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:10:00.563748  481830 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-740780"
	W1017 20:10:00.563767  481830 addons.go:247] addon default-storageclass should already be in state true
	I1017 20:10:00.563794  481830 host.go:66] Checking if "default-k8s-diff-port-740780" exists ...
	I1017 20:10:00.564229  481830 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:10:00.572643  481830 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:10:00.572674  481830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:10:00.572755  481830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:10:00.576119  481830 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1017 20:10:00.579338  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 20:10:00.579365  481830 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 20:10:00.579455  481830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:10:00.607704  481830 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:10:00.607728  481830 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:10:00.607798  481830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:10:00.644272  481830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:10:00.656830  481830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:10:00.668803  481830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:09:57.152295  483598 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 20:09:57.152667  483598 start.go:159] libmachine.API.Create for "auto-804622" (driver="docker")
	I1017 20:09:57.152725  483598 client.go:168] LocalClient.Create starting
	I1017 20:09:57.152826  483598 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem
	I1017 20:09:57.152868  483598 main.go:141] libmachine: Decoding PEM data...
	I1017 20:09:57.152887  483598 main.go:141] libmachine: Parsing certificate...
	I1017 20:09:57.152958  483598 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem
	I1017 20:09:57.152985  483598 main.go:141] libmachine: Decoding PEM data...
	I1017 20:09:57.152998  483598 main.go:141] libmachine: Parsing certificate...
	I1017 20:09:57.153387  483598 cli_runner.go:164] Run: docker network inspect auto-804622 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 20:09:57.175377  483598 cli_runner.go:211] docker network inspect auto-804622 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 20:09:57.175452  483598 network_create.go:284] running [docker network inspect auto-804622] to gather additional debugging logs...
	I1017 20:09:57.175470  483598 cli_runner.go:164] Run: docker network inspect auto-804622
	W1017 20:09:57.203828  483598 cli_runner.go:211] docker network inspect auto-804622 returned with exit code 1
	I1017 20:09:57.203856  483598 network_create.go:287] error running [docker network inspect auto-804622]: docker network inspect auto-804622: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-804622 not found
	I1017 20:09:57.203870  483598 network_create.go:289] output of [docker network inspect auto-804622]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-804622 not found
	
	** /stderr **
	I1017 20:09:57.203972  483598 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:09:57.233942  483598 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9f667d9c3ea2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:fc:1d:c6:d2:da} reservation:<nil>}
	I1017 20:09:57.234211  483598 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-82a22734829b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:22:5a:78:c5:e0:0a} reservation:<nil>}
	I1017 20:09:57.234560  483598 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0b88bd3b523f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:75:74:cd:15:9b} reservation:<nil>}
	I1017 20:09:57.234848  483598 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b07c93b74ead IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ae:cc:0a:13:a9:64} reservation:<nil>}
	I1017 20:09:57.235258  483598 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cd820}
	I1017 20:09:57.235276  483598 network_create.go:124] attempt to create docker network auto-804622 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1017 20:09:57.235328  483598 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-804622 auto-804622
	I1017 20:09:57.312098  483598 network_create.go:108] docker network auto-804622 192.168.85.0/24 created
	I1017 20:09:57.312134  483598 kic.go:121] calculated static IP "192.168.85.2" for the "auto-804622" container
	I1017 20:09:57.312202  483598 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 20:09:57.336932  483598 cli_runner.go:164] Run: docker volume create auto-804622 --label name.minikube.sigs.k8s.io=auto-804622 --label created_by.minikube.sigs.k8s.io=true
	I1017 20:09:57.353217  483598 oci.go:103] Successfully created a docker volume auto-804622
	I1017 20:09:57.353289  483598 cli_runner.go:164] Run: docker run --rm --name auto-804622-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-804622 --entrypoint /usr/bin/test -v auto-804622:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 20:09:58.027826  483598 oci.go:107] Successfully prepared a docker volume auto-804622
	I1017 20:09:58.027863  483598 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:09:58.027882  483598 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 20:09:58.027949  483598 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-804622:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1017 20:10:01.011593  481830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:10:01.100730  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 20:10:01.100849  481830 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 20:10:01.224576  481830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:10:01.230249  481830 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:10:01.268379  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 20:10:01.268402  481830 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 20:10:01.358546  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 20:10:01.358569  481830 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 20:10:01.508947  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 20:10:01.508969  481830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 20:10:01.590724  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 20:10:01.590803  481830 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 20:10:01.627613  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 20:10:01.627691  481830 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 20:10:01.742721  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 20:10:01.742813  481830 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 20:10:01.784602  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 20:10:01.784678  481830 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 20:10:01.821183  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 20:10:01.821260  481830 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 20:10:01.869194  481830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 20:10:03.447792  483598 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-804622:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.419807248s)
	I1017 20:10:03.447837  483598 kic.go:203] duration metric: took 5.419939249s to extract preloaded images to volume ...
	W1017 20:10:03.447965  483598 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1017 20:10:03.448072  483598 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 20:10:03.556000  483598 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-804622 --name auto-804622 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-804622 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-804622 --network auto-804622 --ip 192.168.85.2 --volume auto-804622:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 20:10:03.932411  483598 cli_runner.go:164] Run: docker container inspect auto-804622 --format={{.State.Running}}
	I1017 20:10:03.954511  483598 cli_runner.go:164] Run: docker container inspect auto-804622 --format={{.State.Status}}
	I1017 20:10:03.984976  483598 cli_runner.go:164] Run: docker exec auto-804622 stat /var/lib/dpkg/alternatives/iptables
	I1017 20:10:04.060117  483598 oci.go:144] the created container "auto-804622" has a running status.
	I1017 20:10:04.060153  483598 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/auto-804622/id_rsa...
	I1017 20:10:04.244246  483598 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21753-257739/.minikube/machines/auto-804622/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 20:10:04.269370  483598 cli_runner.go:164] Run: docker container inspect auto-804622 --format={{.State.Status}}
	I1017 20:10:04.299623  483598 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 20:10:04.299647  483598 kic_runner.go:114] Args: [docker exec --privileged auto-804622 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 20:10:04.369832  483598 cli_runner.go:164] Run: docker container inspect auto-804622 --format={{.State.Status}}
	I1017 20:10:04.397317  483598 machine.go:93] provisionDockerMachine start ...
	I1017 20:10:04.397434  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:04.421394  483598 main.go:141] libmachine: Using SSH client type: native
	I1017 20:10:04.421730  483598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1017 20:10:04.421740  483598 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:10:04.422493  483598 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33878->127.0.0.1:33460: read: connection reset by peer
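	The handshake failure above is transient: the auto-804622 container was created a fraction of a second earlier and its sshd is still coming up, and the same forwarded port (33460) answers successfully a few lines further down. A quick way to confirm which host port Docker mapped to the container's SSH port is sketched below (assumes the docker CLI on the Jenkins host; the commented output is what this run's mapping would look like):
	
		docker port auto-804622 22
		# 127.0.0.1:33460
	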
	I1017 20:10:08.368503  481830 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.356821294s)
	I1017 20:10:08.368571  481830 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.13825533s)
	I1017 20:10:08.368602  481830 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-740780" to be "Ready" ...
	I1017 20:10:08.368933  481830 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.144284155s)
	I1017 20:10:08.442035  481830 node_ready.go:49] node "default-k8s-diff-port-740780" is "Ready"
	I1017 20:10:08.442113  481830 node_ready.go:38] duration metric: took 73.498143ms for node "default-k8s-diff-port-740780" to be "Ready" ...
	I1017 20:10:08.442142  481830 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:10:08.442238  481830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:10:08.513321  481830 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.643994627s)
	I1017 20:10:08.513585  481830 api_server.go:72] duration metric: took 8.026258589s to wait for apiserver process to appear ...
	I1017 20:10:08.513644  481830 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:10:08.513677  481830 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1017 20:10:08.516440  481830 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-740780 addons enable metrics-server
	
	I1017 20:10:08.519350  481830 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1017 20:10:08.522255  481830 addons.go:514] duration metric: took 8.036066093s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1017 20:10:08.526204  481830 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1017 20:10:08.528006  481830 api_server.go:141] control plane version: v1.34.1
	I1017 20:10:08.528027  481830 api_server.go:131] duration metric: took 14.363459ms to wait for apiserver health ...
	I1017 20:10:08.528035  481830 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:10:08.532822  481830 system_pods.go:59] 8 kube-system pods found
	I1017 20:10:08.532903  481830 system_pods.go:61] "coredns-66bc5c9577-6mknt" [15647d52-61fb-4af6-8d28-66da6ebd0923] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:10:08.532929  481830 system_pods.go:61] "etcd-default-k8s-diff-port-740780" [6a636316-c994-44d8-b608-0c1cfa06bd55] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:10:08.532951  481830 system_pods.go:61] "kindnet-fnx26" [16e1d707-7d88-4317-ab9f-dd7698ee1cd1] Running
	I1017 20:10:08.532985  481830 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-740780" [7e36f4e9-953c-457d-b6bf-b26ac987ab87] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:10:08.533009  481830 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-740780" [9e5bfd14-bb31-4668-a9db-6278ca49ae54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:10:08.533031  481830 system_pods.go:61] "kube-proxy-8x772" [19f55ff7-64eb-4407-9168-aa18ddbe543c] Running
	I1017 20:10:08.533062  481830 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-740780" [44223246-1f61-4365-98a5-c3820458e28a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:10:08.533081  481830 system_pods.go:61] "storage-provisioner" [f0266236-3025-407f-ae0f-c4e9e5ae8ff0] Running
	I1017 20:10:08.533104  481830 system_pods.go:74] duration metric: took 5.063034ms to wait for pod list to return data ...
	I1017 20:10:08.533134  481830 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:10:08.536010  481830 default_sa.go:45] found service account: "default"
	I1017 20:10:08.536079  481830 default_sa.go:55] duration metric: took 2.92651ms for default service account to be created ...
	I1017 20:10:08.536103  481830 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:10:08.539843  481830 system_pods.go:86] 8 kube-system pods found
	I1017 20:10:08.539937  481830 system_pods.go:89] "coredns-66bc5c9577-6mknt" [15647d52-61fb-4af6-8d28-66da6ebd0923] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:10:08.539971  481830 system_pods.go:89] "etcd-default-k8s-diff-port-740780" [6a636316-c994-44d8-b608-0c1cfa06bd55] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:10:08.540006  481830 system_pods.go:89] "kindnet-fnx26" [16e1d707-7d88-4317-ab9f-dd7698ee1cd1] Running
	I1017 20:10:08.540029  481830 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-740780" [7e36f4e9-953c-457d-b6bf-b26ac987ab87] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:10:08.540062  481830 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-740780" [9e5bfd14-bb31-4668-a9db-6278ca49ae54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:10:08.540092  481830 system_pods.go:89] "kube-proxy-8x772" [19f55ff7-64eb-4407-9168-aa18ddbe543c] Running
	I1017 20:10:08.540116  481830 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-740780" [44223246-1f61-4365-98a5-c3820458e28a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:10:08.540136  481830 system_pods.go:89] "storage-provisioner" [f0266236-3025-407f-ae0f-c4e9e5ae8ff0] Running
	I1017 20:10:08.540169  481830 system_pods.go:126] duration metric: took 4.046881ms to wait for k8s-apps to be running ...
	I1017 20:10:08.540190  481830 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:10:08.540267  481830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:10:08.560287  481830 system_svc.go:56] duration metric: took 20.087185ms WaitForService to wait for kubelet
	I1017 20:10:08.560362  481830 kubeadm.go:586] duration metric: took 8.073035536s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:10:08.560399  481830 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:10:08.563785  481830 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:10:08.563848  481830 node_conditions.go:123] node cpu capacity is 2
	I1017 20:10:08.563885  481830 node_conditions.go:105] duration metric: took 3.462839ms to run NodePressure ...
	I1017 20:10:08.563914  481830 start.go:241] waiting for startup goroutines ...
	I1017 20:10:08.563944  481830 start.go:246] waiting for cluster config update ...
	I1017 20:10:08.563969  481830 start.go:255] writing updated cluster config ...
	I1017 20:10:08.564303  481830 ssh_runner.go:195] Run: rm -f paused
	I1017 20:10:08.576907  481830 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:10:08.584960  481830 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6mknt" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 20:10:10.602945  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	I1017 20:10:07.616382  483598 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-804622
	
	I1017 20:10:07.616456  483598 ubuntu.go:182] provisioning hostname "auto-804622"
	I1017 20:10:07.616556  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:07.639674  483598 main.go:141] libmachine: Using SSH client type: native
	I1017 20:10:07.639984  483598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1017 20:10:07.639995  483598 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-804622 && echo "auto-804622" | sudo tee /etc/hostname
	I1017 20:10:07.848287  483598 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-804622
	
	I1017 20:10:07.848417  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:07.876391  483598 main.go:141] libmachine: Using SSH client type: native
	I1017 20:10:07.876769  483598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1017 20:10:07.876790  483598 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-804622' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-804622/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-804622' | sudo tee -a /etc/hosts; 
				fi
			fi
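	The shell fragment above only guarantees that the node's own hostname resolves locally. On a fresh kicbase container the 127.0.1.1 entry is normally absent, so after the command the relevant part of /etc/hosts would look roughly like this (a sketch, not captured from this run):
	
		127.0.0.1	localhost
		127.0.1.1	auto-804622
	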
	I1017 20:10:08.053186  483598 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:10:08.053215  483598 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 20:10:08.053272  483598 ubuntu.go:190] setting up certificates
	I1017 20:10:08.053283  483598 provision.go:84] configureAuth start
	I1017 20:10:08.053369  483598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-804622
	I1017 20:10:08.093086  483598 provision.go:143] copyHostCerts
	I1017 20:10:08.093162  483598 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 20:10:08.093178  483598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 20:10:08.093252  483598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 20:10:08.093352  483598 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 20:10:08.093363  483598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 20:10:08.093393  483598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 20:10:08.093464  483598 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 20:10:08.093475  483598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 20:10:08.093501  483598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 20:10:08.093562  483598 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.auto-804622 san=[127.0.0.1 192.168.85.2 auto-804622 localhost minikube]
	I1017 20:10:08.571761  483598 provision.go:177] copyRemoteCerts
	I1017 20:10:08.571834  483598 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:10:08.571883  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:08.599920  483598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/auto-804622/id_rsa Username:docker}
	I1017 20:10:08.705603  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:10:08.723989  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1017 20:10:08.745559  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 20:10:08.766772  483598 provision.go:87] duration metric: took 713.460394ms to configureAuth
	I1017 20:10:08.766800  483598 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:10:08.766985  483598 config.go:182] Loaded profile config "auto-804622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:10:08.767101  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:08.787352  483598 main.go:141] libmachine: Using SSH client type: native
	I1017 20:10:08.787739  483598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1017 20:10:08.787757  483598 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:10:09.054326  483598 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:10:09.054408  483598 machine.go:96] duration metric: took 4.657060083s to provisionDockerMachine
	I1017 20:10:09.054441  483598 client.go:171] duration metric: took 11.901704674s to LocalClient.Create
	I1017 20:10:09.054495  483598 start.go:167] duration metric: took 11.901830136s to libmachine.API.Create "auto-804622"
	I1017 20:10:09.054523  483598 start.go:293] postStartSetup for "auto-804622" (driver="docker")
	I1017 20:10:09.054551  483598 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:10:09.054654  483598 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:10:09.054731  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:09.075470  483598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/auto-804622/id_rsa Username:docker}
	I1017 20:10:09.185528  483598 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:10:09.190482  483598 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:10:09.190513  483598 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:10:09.190524  483598 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 20:10:09.190581  483598 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 20:10:09.190663  483598 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 20:10:09.190765  483598 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:10:09.200651  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:10:09.222990  483598 start.go:296] duration metric: took 168.434432ms for postStartSetup
	I1017 20:10:09.223365  483598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-804622
	I1017 20:10:09.246198  483598 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/config.json ...
	I1017 20:10:09.246489  483598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:10:09.246544  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:09.278376  483598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/auto-804622/id_rsa Username:docker}
	I1017 20:10:09.390494  483598 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:10:09.395463  483598 start.go:128] duration metric: took 12.246633406s to createHost
	I1017 20:10:09.395490  483598 start.go:83] releasing machines lock for "auto-804622", held for 12.246794559s
	I1017 20:10:09.395570  483598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-804622
	I1017 20:10:09.415545  483598 ssh_runner.go:195] Run: cat /version.json
	I1017 20:10:09.415610  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:09.415888  483598 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:10:09.415949  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:09.436656  483598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/auto-804622/id_rsa Username:docker}
	I1017 20:10:09.444809  483598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/auto-804622/id_rsa Username:docker}
	I1017 20:10:09.540452  483598 ssh_runner.go:195] Run: systemctl --version
	I1017 20:10:09.633451  483598 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:10:09.674603  483598 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:10:09.679374  483598 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:10:09.679468  483598 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:10:09.711973  483598 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1017 20:10:09.711997  483598 start.go:495] detecting cgroup driver to use...
	I1017 20:10:09.712032  483598 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:10:09.712087  483598 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:10:09.731298  483598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:10:09.743984  483598 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:10:09.744099  483598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:10:09.768843  483598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:10:09.797081  483598 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:10:09.934284  483598 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:10:10.075750  483598 docker.go:234] disabling docker service ...
	I1017 20:10:10.075880  483598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:10:10.113984  483598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:10:10.129734  483598 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:10:10.275530  483598 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:10:10.405168  483598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:10:10.426617  483598 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:10:10.447190  483598 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:10:10.447294  483598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:10:10.457453  483598 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:10:10.457591  483598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:10:10.477563  483598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:10:10.494244  483598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:10:10.505740  483598 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:10:10.518959  483598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:10:10.528770  483598 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:10:10.543216  483598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:10:10.552807  483598 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:10:10.560345  483598 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
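	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pinned to the minikube pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. The resulting drop-in would contain roughly the following keys (a sketch derived from the commands above; the exact section layout depends on the file shipped in the kicbase image):
	
		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]
	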
	I1017 20:10:10.568064  483598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:10:10.700878  483598 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:10:10.836357  483598 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:10:10.836427  483598 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:10:10.841097  483598 start.go:563] Will wait 60s for crictl version
	I1017 20:10:10.841206  483598 ssh_runner.go:195] Run: which crictl
	I1017 20:10:10.844752  483598 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:10:10.872059  483598 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:10:10.872215  483598 ssh_runner.go:195] Run: crio --version
	I1017 20:10:10.902009  483598 ssh_runner.go:195] Run: crio --version
	I1017 20:10:10.937547  483598 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:10:10.940454  483598 cli_runner.go:164] Run: docker network inspect auto-804622 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:10:10.956318  483598 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 20:10:10.961556  483598 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:10:10.980682  483598 kubeadm.go:883] updating cluster {Name:auto-804622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-804622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:10:10.980800  483598 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:10:10.980860  483598 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:10:11.023983  483598 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:10:11.024011  483598 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:10:11.024068  483598 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:10:11.052266  483598 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:10:11.052294  483598 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:10:11.052304  483598 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1017 20:10:11.052453  483598 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-804622 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-804622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
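	The unit override printed above is what later gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 361-byte scp a few lines below). If provisioning ever needs to be debugged by hand, the effective unit and kubelet flags can be inspected on the node with something like the following (a sketch; assumes the profile name used in this run):
	
		minikube -p auto-804622 ssh -- systemctl cat kubelet
		minikube -p auto-804622 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	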
	I1017 20:10:11.052570  483598 ssh_runner.go:195] Run: crio config
	I1017 20:10:11.128399  483598 cni.go:84] Creating CNI manager for ""
	I1017 20:10:11.128426  483598 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:10:11.128444  483598 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:10:11.128490  483598 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-804622 NodeName:auto-804622 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/
manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:10:11.128724  483598 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-804622"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:10:11.128808  483598 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:10:11.141489  483598 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:10:11.141568  483598 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:10:11.150433  483598 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1017 20:10:11.164794  483598 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:10:11.180991  483598 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
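	With the rendered kubeadm config now staged as /var/tmp/minikube/kubeadm.yaml.new (it is copied over to /var/tmp/minikube/kubeadm.yaml just before init, as shown further down), the same file could be sanity-checked by hand with kubeadm's dry-run mode, for example (a sketch; uses the binaries path minikube installs in this run):
	
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
		    --config /var/tmp/minikube/kubeadm.yaml \
		    --dry-run
	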
	I1017 20:10:11.210609  483598 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:10:11.214568  483598 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:10:11.231554  483598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:10:11.366734  483598 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:10:11.384339  483598 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622 for IP: 192.168.85.2
	I1017 20:10:11.384409  483598 certs.go:195] generating shared ca certs ...
	I1017 20:10:11.384456  483598 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:11.384673  483598 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 20:10:11.384761  483598 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 20:10:11.384786  483598 certs.go:257] generating profile certs ...
	I1017 20:10:11.384860  483598 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/client.key
	I1017 20:10:11.384899  483598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/client.crt with IP's: []
	I1017 20:10:11.634971  483598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/client.crt ...
	I1017 20:10:11.635050  483598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/client.crt: {Name:mk17d77eb2a35743ef5ae244f9ae9da67a7eeb56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:11.635286  483598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/client.key ...
	I1017 20:10:11.635323  483598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/client.key: {Name:mk01f927dbb1ecf78c0d4b86082e14a79ab64245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:11.635464  483598 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.key.77a2ba55
	I1017 20:10:11.635507  483598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.crt.77a2ba55 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1017 20:10:12.587127  483598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.crt.77a2ba55 ...
	I1017 20:10:12.587200  483598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.crt.77a2ba55: {Name:mk0c077d35bd5a3ed6e2edf2bd8d9c1937b551f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:12.587394  483598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.key.77a2ba55 ...
	I1017 20:10:12.587435  483598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.key.77a2ba55: {Name:mkb73a1db540eb0cb0001ef06da90f6bb834a09a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:12.587546  483598 certs.go:382] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.crt.77a2ba55 -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.crt
	I1017 20:10:12.587663  483598 certs.go:386] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.key.77a2ba55 -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.key
	I1017 20:10:12.587769  483598 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/proxy-client.key
	I1017 20:10:12.587832  483598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/proxy-client.crt with IP's: []
	I1017 20:10:12.806386  483598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/proxy-client.crt ...
	I1017 20:10:12.806458  483598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/proxy-client.crt: {Name:mk087fbb1670990a7ad9f61450044d9c39ce1004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:12.806677  483598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/proxy-client.key ...
	I1017 20:10:12.806713  483598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/proxy-client.key: {Name:mk128f7fb01dfc3b3add3970a0996453a29ad62b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:12.806944  483598 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 20:10:12.807010  483598 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 20:10:12.807042  483598 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:10:12.807089  483598 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:10:12.807146  483598 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:10:12.807192  483598 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 20:10:12.807270  483598 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:10:12.807932  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:10:12.841392  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:10:12.864067  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:10:12.888939  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 20:10:12.914271  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1017 20:10:12.938373  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:10:12.963437  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:10:12.987152  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:10:13.012641  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 20:10:13.037465  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 20:10:13.067944  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:10:13.102859  483598 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:10:13.120868  483598 ssh_runner.go:195] Run: openssl version
	I1017 20:10:13.130063  483598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 20:10:13.141067  483598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 20:10:13.150532  483598 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 20:10:13.150654  483598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 20:10:13.197054  483598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 20:10:13.208061  483598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 20:10:13.218755  483598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 20:10:13.223447  483598 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 20:10:13.223517  483598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 20:10:13.273768  483598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:10:13.296226  483598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:10:13.312365  483598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:10:13.324956  483598 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:10:13.325040  483598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:10:13.410860  483598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
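	The three test/ln steps above are what give each CA its OpenSSL hash-named alias under /etc/ssl/certs (51391683.0, 3ec20f2e.0 and b5213941.0 in this run). The hash half of each link name comes from the openssl x509 -hash calls interleaved above, so the pattern for any one certificate is essentially (a sketch):
	
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
		# in this run: h=b5213941
	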
	I1017 20:10:13.426533  483598 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:10:13.431183  483598 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 20:10:13.431239  483598 kubeadm.go:400] StartCluster: {Name:auto-804622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-804622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:10:13.431322  483598 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:10:13.431386  483598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:10:13.479632  483598 cri.go:89] found id: ""
	I1017 20:10:13.479718  483598 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:10:13.497269  483598 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:10:13.510771  483598 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 20:10:13.510835  483598 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:10:13.520125  483598 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:10:13.520196  483598 kubeadm.go:157] found existing configuration files:
	
	I1017 20:10:13.520280  483598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 20:10:13.534637  483598 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:10:13.534779  483598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:10:13.547079  483598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 20:10:13.556956  483598 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:10:13.557072  483598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:10:13.564870  483598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 20:10:13.574365  483598 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:10:13.574494  483598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:10:13.582698  483598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 20:10:13.597219  483598 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:10:13.597348  483598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 20:10:13.605909  483598 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 20:10:13.663458  483598 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 20:10:13.663901  483598 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 20:10:13.694724  483598 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 20:10:13.694886  483598 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1017 20:10:13.694965  483598 kubeadm.go:318] OS: Linux
	I1017 20:10:13.695056  483598 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 20:10:13.695139  483598 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1017 20:10:13.695224  483598 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 20:10:13.695310  483598 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 20:10:13.695396  483598 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 20:10:13.695486  483598 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 20:10:13.695610  483598 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 20:10:13.695695  483598 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 20:10:13.695781  483598 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1017 20:10:13.790691  483598 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 20:10:13.790879  483598 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 20:10:13.791015  483598 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 20:10:13.800909  483598 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1017 20:10:13.100144  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	W1017 20:10:15.591766  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	I1017 20:10:13.808096  483598 out.go:252]   - Generating certificates and keys ...
	I1017 20:10:13.808293  483598 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 20:10:13.808399  483598 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 20:10:14.278206  483598 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 20:10:14.548085  483598 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 20:10:14.712660  483598 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 20:10:15.942607  483598 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	W1017 20:10:17.591875  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	W1017 20:10:19.592891  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	I1017 20:10:17.686018  483598 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 20:10:17.686644  483598 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-804622 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 20:10:18.058357  483598 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 20:10:18.059073  483598 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-804622 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 20:10:18.594021  483598 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 20:10:18.848323  483598 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 20:10:19.192894  483598 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 20:10:19.193058  483598 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 20:10:19.376921  483598 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 20:10:19.771813  483598 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 20:10:21.122360  483598 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 20:10:22.652641  483598 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 20:10:23.389128  483598 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 20:10:23.390076  483598 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 20:10:23.393391  483598 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1017 20:10:21.593220  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	W1017 20:10:23.594611  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	W1017 20:10:25.595758  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	I1017 20:10:23.398826  483598 out.go:252]   - Booting up control plane ...
	I1017 20:10:23.398937  483598 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 20:10:23.399019  483598 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 20:10:23.399263  483598 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 20:10:23.436258  483598 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 20:10:23.436463  483598 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 20:10:23.446158  483598 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 20:10:23.446373  483598 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 20:10:23.446458  483598 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 20:10:23.617482  483598 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 20:10:23.617689  483598 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 20:10:25.120850  483598 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501931561s
	I1017 20:10:25.123302  483598 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 20:10:25.123706  483598 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1017 20:10:25.124076  483598 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 20:10:25.125053  483598 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1017 20:10:28.090729  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	W1017 20:10:30.096207  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	I1017 20:10:29.343008  483598 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.217459766s
	I1017 20:10:30.412362  483598 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.286566295s
	I1017 20:10:32.126545  483598 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.002057342s
	I1017 20:10:32.146970  483598 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 20:10:32.161792  483598 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 20:10:32.176821  483598 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 20:10:32.177065  483598 kubeadm.go:318] [mark-control-plane] Marking the node auto-804622 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 20:10:32.188825  483598 kubeadm.go:318] [bootstrap-token] Using token: arqy1z.6dykx1ylb9hfjatw
	I1017 20:10:32.191931  483598 out.go:252]   - Configuring RBAC rules ...
	I1017 20:10:32.192060  483598 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 20:10:32.196316  483598 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 20:10:32.205820  483598 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 20:10:32.212670  483598 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 20:10:32.219939  483598 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 20:10:32.225874  483598 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 20:10:32.534499  483598 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 20:10:32.976613  483598 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 20:10:33.532901  483598 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 20:10:33.534300  483598 kubeadm.go:318] 
	I1017 20:10:33.534384  483598 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 20:10:33.534395  483598 kubeadm.go:318] 
	I1017 20:10:33.534476  483598 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 20:10:33.534480  483598 kubeadm.go:318] 
	I1017 20:10:33.534507  483598 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 20:10:33.534569  483598 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 20:10:33.534622  483598 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 20:10:33.534627  483598 kubeadm.go:318] 
	I1017 20:10:33.534691  483598 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 20:10:33.534696  483598 kubeadm.go:318] 
	I1017 20:10:33.534756  483598 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 20:10:33.534761  483598 kubeadm.go:318] 
	I1017 20:10:33.534815  483598 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 20:10:33.534893  483598 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 20:10:33.534964  483598 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 20:10:33.534969  483598 kubeadm.go:318] 
	I1017 20:10:33.535056  483598 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 20:10:33.535137  483598 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 20:10:33.535141  483598 kubeadm.go:318] 
	I1017 20:10:33.535229  483598 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token arqy1z.6dykx1ylb9hfjatw \
	I1017 20:10:33.535336  483598 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 \
	I1017 20:10:33.535359  483598 kubeadm.go:318] 	--control-plane 
	I1017 20:10:33.535364  483598 kubeadm.go:318] 
	I1017 20:10:33.535452  483598 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 20:10:33.535457  483598 kubeadm.go:318] 
	I1017 20:10:33.535542  483598 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token arqy1z.6dykx1ylb9hfjatw \
	I1017 20:10:33.535648  483598 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 
	I1017 20:10:33.540597  483598 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1017 20:10:33.540839  483598 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1017 20:10:33.540952  483598 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 20:10:33.540975  483598 cni.go:84] Creating CNI manager for ""
	I1017 20:10:33.540987  483598 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:10:33.544258  483598 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1017 20:10:32.590961  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	W1017 20:10:35.090992  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	I1017 20:10:33.547050  483598 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 20:10:33.552655  483598 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 20:10:33.552675  483598 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 20:10:33.577088  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 20:10:33.996115  483598 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 20:10:33.996247  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:33.996325  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-804622 minikube.k8s.io/updated_at=2025_10_17T20_10_33_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d minikube.k8s.io/name=auto-804622 minikube.k8s.io/primary=true
	I1017 20:10:34.331211  483598 ops.go:34] apiserver oom_adj: -16
	I1017 20:10:34.331316  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:34.831681  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:35.331970  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:35.832065  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:36.332337  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:36.831469  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:37.332116  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:37.831558  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:38.331826  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:38.462461  483598 kubeadm.go:1113] duration metric: took 4.466260221s to wait for elevateKubeSystemPrivileges
	I1017 20:10:38.462493  483598 kubeadm.go:402] duration metric: took 25.031257355s to StartCluster
	I1017 20:10:38.462513  483598 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:38.462600  483598 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:10:38.463542  483598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:38.463764  483598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 20:10:38.463770  483598 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:10:38.464025  483598 config.go:182] Loaded profile config "auto-804622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:10:38.464072  483598 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:10:38.464136  483598 addons.go:69] Setting storage-provisioner=true in profile "auto-804622"
	I1017 20:10:38.464151  483598 addons.go:238] Setting addon storage-provisioner=true in "auto-804622"
	I1017 20:10:38.464180  483598 host.go:66] Checking if "auto-804622" exists ...
	I1017 20:10:38.464643  483598 cli_runner.go:164] Run: docker container inspect auto-804622 --format={{.State.Status}}
	I1017 20:10:38.465033  483598 addons.go:69] Setting default-storageclass=true in profile "auto-804622"
	I1017 20:10:38.465050  483598 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-804622"
	I1017 20:10:38.465310  483598 cli_runner.go:164] Run: docker container inspect auto-804622 --format={{.State.Status}}
	I1017 20:10:38.468691  483598 out.go:179] * Verifying Kubernetes components...
	I1017 20:10:38.476746  483598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:10:38.506532  483598 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:10:38.507961  483598 addons.go:238] Setting addon default-storageclass=true in "auto-804622"
	I1017 20:10:38.507993  483598 host.go:66] Checking if "auto-804622" exists ...
	I1017 20:10:38.508394  483598 cli_runner.go:164] Run: docker container inspect auto-804622 --format={{.State.Status}}
	I1017 20:10:38.510635  483598 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:10:38.510654  483598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:10:38.510715  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:38.533710  483598 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:10:38.533730  483598 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:10:38.533794  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:38.555675  483598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/auto-804622/id_rsa Username:docker}
	I1017 20:10:38.566309  483598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/auto-804622/id_rsa Username:docker}
	I1017 20:10:38.766225  483598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 20:10:38.841105  483598 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:10:38.927636  483598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:10:38.943830  483598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:10:39.423715  483598 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1017 20:10:39.425731  483598 node_ready.go:35] waiting up to 15m0s for node "auto-804622" to be "Ready" ...
	I1017 20:10:39.822803  483598 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1017 20:10:37.590247  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	W1017 20:10:39.590566  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	I1017 20:10:40.090980  481830 pod_ready.go:94] pod "coredns-66bc5c9577-6mknt" is "Ready"
	I1017 20:10:40.091012  481830 pod_ready.go:86] duration metric: took 31.506030142s for pod "coredns-66bc5c9577-6mknt" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:40.096904  481830 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:40.105631  481830 pod_ready.go:94] pod "etcd-default-k8s-diff-port-740780" is "Ready"
	I1017 20:10:40.105667  481830 pod_ready.go:86] duration metric: took 8.726548ms for pod "etcd-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:40.110752  481830 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:40.121523  481830 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-740780" is "Ready"
	I1017 20:10:40.121566  481830 pod_ready.go:86] duration metric: took 10.769406ms for pod "kube-apiserver-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:40.127012  481830 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:40.288391  481830 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-740780" is "Ready"
	I1017 20:10:40.288422  481830 pod_ready.go:86] duration metric: took 161.377883ms for pod "kube-controller-manager-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:40.490232  481830 pod_ready.go:83] waiting for pod "kube-proxy-8x772" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:40.888013  481830 pod_ready.go:94] pod "kube-proxy-8x772" is "Ready"
	I1017 20:10:40.888047  481830 pod_ready.go:86] duration metric: took 397.78546ms for pod "kube-proxy-8x772" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:41.087822  481830 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:41.488392  481830 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-740780" is "Ready"
	I1017 20:10:41.488467  481830 pod_ready.go:86] duration metric: took 400.614232ms for pod "kube-scheduler-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:41.488496  481830 pod_ready.go:40] duration metric: took 32.911560072s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:10:41.563453  481830 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 20:10:41.566420  481830 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-740780" cluster and "default" namespace by default
	I1017 20:10:39.825804  483598 addons.go:514] duration metric: took 1.361720083s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1017 20:10:39.928019  483598 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-804622" context rescaled to 1 replicas
	W1017 20:10:41.429033  483598 node_ready.go:57] node "auto-804622" has "Ready":"False" status (will retry)
	W1017 20:10:43.928585  483598 node_ready.go:57] node "auto-804622" has "Ready":"False" status (will retry)
	W1017 20:10:45.929097  483598 node_ready.go:57] node "auto-804622" has "Ready":"False" status (will retry)
	W1017 20:10:47.929678  483598 node_ready.go:57] node "auto-804622" has "Ready":"False" status (will retry)
	W1017 20:10:50.428354  483598 node_ready.go:57] node "auto-804622" has "Ready":"False" status (will retry)
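	
	A minimal sketch, assuming the binary and manifest paths shown in the start log above, of the two steps minikube runs on the freshly initialized node: applying the generated kindnet CNI manifest and creating the "minikube-rbac" cluster-admin binding for the kube-system default service account:
	
	  # run on the minikube node itself (paths taken verbatim from the log above)
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	    --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac \
	    --clusterrole=cluster-admin --serviceaccount=kube-system:default \
	    --kubeconfig=/var/lib/minikube/kubeconfig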
	
	
	==> CRI-O <==
	Oct 17 20:10:34 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:34.213084287Z" level=info msg="Removed container cfcc4ac34cdab08ebe73bbd94e6de4343ad52fd37f9840a185fc6f1f13c06441: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ms6q/dashboard-metrics-scraper" id=4f51a21f-e76e-47a5-96c9-fb67174e89fe name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:10:37 default-k8s-diff-port-740780 conmon[1146]: conmon 355f42b2d9e5ab8e9cc0 <ninfo>: container 1152 exited with status 1
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.198513153Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5bf35f67-d029-40ff-9b20-d132b362159a name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.199604085Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=033a709a-a5d4-4b27-a58b-880ce5c3c2f6 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.200457797Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=26153b63-3e9e-41ad-a451-5535e8df2cde name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.200726147Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.209653166Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.209832288Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b009eec7bd11fc5a27fe1713e37940cb3646cde95c045ec89271d5e511beffc0/merged/etc/passwd: no such file or directory"
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.209863163Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b009eec7bd11fc5a27fe1713e37940cb3646cde95c045ec89271d5e511beffc0/merged/etc/group: no such file or directory"
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.210129568Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.243615839Z" level=info msg="Created container a159b70cb0ab7f408b26017316bda6e688ef0df499dfafaeb05cb122b5fb6b17: kube-system/storage-provisioner/storage-provisioner" id=26153b63-3e9e-41ad-a451-5535e8df2cde name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.245223303Z" level=info msg="Starting container: a159b70cb0ab7f408b26017316bda6e688ef0df499dfafaeb05cb122b5fb6b17" id=de5b67e3-fdc3-4df6-9641-de501fbbf10f name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.250231545Z" level=info msg="Started container" PID=1640 containerID=a159b70cb0ab7f408b26017316bda6e688ef0df499dfafaeb05cb122b5fb6b17 description=kube-system/storage-provisioner/storage-provisioner id=de5b67e3-fdc3-4df6-9641-de501fbbf10f name=/runtime.v1.RuntimeService/StartContainer sandboxID=3306bff645312adf8def5e71965035b303c2e22027e7206658971e4f6b47cd98
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.955635803Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.963583592Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.963616641Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.963639139Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.966613943Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.966648764Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.966670417Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.969748159Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.969783292Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.969810754Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.972955678Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.97299068Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	a159b70cb0ab7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           18 seconds ago      Running             storage-provisioner         2                   3306bff645312       storage-provisioner                                    kube-system
	6e976958932ed       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   96969515dc76a       dashboard-metrics-scraper-6ffb444bf9-4ms6q             kubernetes-dashboard
	e477022720326       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   31 seconds ago      Running             kubernetes-dashboard        0                   10f5a9fa8e695       kubernetes-dashboard-855c9754f9-rm6kw                  kubernetes-dashboard
	3380c611e12db       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           48 seconds ago      Running             coredns                     1                   afd23f7b94063       coredns-66bc5c9577-6mknt                               kube-system
	331bf8b9df6dd       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           48 seconds ago      Running             busybox                     1                   119af0bbf542b       busybox                                                default
	355f42b2d9e5a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago      Exited              storage-provisioner         1                   3306bff645312       storage-provisioner                                    kube-system
	ea08533626e75       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   f9c2817c2e370       kindnet-fnx26                                          kube-system
	cb4d42676d8a4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           49 seconds ago      Running             kube-proxy                  1                   296db648a407e       kube-proxy-8x772                                       kube-system
	a6b3e974b27e4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           56 seconds ago      Running             etcd                        1                   ede6389dbe66e       etcd-default-k8s-diff-port-740780                      kube-system
	7c7665546cb77       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           56 seconds ago      Running             kube-scheduler              1                   9ab82f8b777bb       kube-scheduler-default-k8s-diff-port-740780            kube-system
	a6caff4182327       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           56 seconds ago      Running             kube-apiserver              1                   d0f58d59f6f5e       kube-apiserver-default-k8s-diff-port-740780            kube-system
	9b30d2deb9ae5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           56 seconds ago      Running             kube-controller-manager     1                   f5d51c6cfcd54       kube-controller-manager-default-k8s-diff-port-740780   kube-system
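	
	The container status table above is the CRI runtime's own view of the node. On this crio-based profile the same listing can be reproduced (a hypothetical follow-up, assuming the profile name shown in the node header) with:
	
	  minikube -p default-k8s-diff-port-740780 ssh -- sudo crictl ps -a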
	
	
	==> coredns [3380c611e12dbdeaa42525e0a861b568befa9b96d862018967217894b34edf5b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59531 - 33281 "HINFO IN 4691281537781563261.7670465878819384505. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023499334s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
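	
	The CoreDNS errors above are all list timeouts against the in-cluster apiserver VIP (10.96.0.1:443); they stop around the time the kindnet caches sync (see the kindnet section below). A quick hypothetical check from the test host, assuming the kubeconfig context created for this profile:
	
	  kubectl --context default-k8s-diff-port-740780 get endpoints kubernetes
	  kubectl --context default-k8s-diff-port-740780 -n kube-system get pods -l k8s-app=kube-dns -o wide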
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-740780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-740780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=default-k8s-diff-port-740780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_08_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:08:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-740780
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:10:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:10:37 +0000   Fri, 17 Oct 2025 20:08:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:10:37 +0000   Fri, 17 Oct 2025 20:08:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:10:37 +0000   Fri, 17 Oct 2025 20:08:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:10:37 +0000   Fri, 17 Oct 2025 20:09:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-740780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                aeaa4ad6-0a8d-467b-bdc0-41bfb9026ea7
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-6mknt                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m16s
	  kube-system                 etcd-default-k8s-diff-port-740780                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m21s
	  kube-system                 kindnet-fnx26                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-default-k8s-diff-port-740780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-740780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-proxy-8x772                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-default-k8s-diff-port-740780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4ms6q              0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rm6kw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m14s              kube-proxy       
	  Normal   Starting                 48s                kube-proxy       
	  Normal   Starting                 2m22s              kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m22s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     2m21s              kubelet          Node default-k8s-diff-port-740780 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m21s              kubelet          Node default-k8s-diff-port-740780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  2m21s              kubelet          Node default-k8s-diff-port-740780 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m17s              node-controller  Node default-k8s-diff-port-740780 event: Registered Node default-k8s-diff-port-740780 in Controller
	  Normal   NodeReady                95s                kubelet          Node default-k8s-diff-port-740780 status is now: NodeReady
	  Normal   Starting                 58s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 58s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  57s (x8 over 58s)  kubelet          Node default-k8s-diff-port-740780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x8 over 58s)  kubelet          Node default-k8s-diff-port-740780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x8 over 58s)  kubelet          Node default-k8s-diff-port-740780 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           46s                node-controller  Node default-k8s-diff-port-740780 event: Registered Node default-k8s-diff-port-740780 in Controller
	
	
	==> dmesg <==
	[ +43.697346] overlayfs: idmapped layers are currently not supported
	[Oct17 19:48] overlayfs: idmapped layers are currently not supported
	[Oct17 19:49] overlayfs: idmapped layers are currently not supported
	[ +26.194162] overlayfs: idmapped layers are currently not supported
	[Oct17 19:50] overlayfs: idmapped layers are currently not supported
	[Oct17 19:52] overlayfs: idmapped layers are currently not supported
	[Oct17 19:54] overlayfs: idmapped layers are currently not supported
	[Oct17 19:55] overlayfs: idmapped layers are currently not supported
	[Oct17 19:56] overlayfs: idmapped layers are currently not supported
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:01] overlayfs: idmapped layers are currently not supported
	[ +29.873287] overlayfs: idmapped layers are currently not supported
	[Oct17 20:02] overlayfs: idmapped layers are currently not supported
	[ +29.827785] overlayfs: idmapped layers are currently not supported
	[Oct17 20:03] overlayfs: idmapped layers are currently not supported
	[Oct17 20:04] overlayfs: idmapped layers are currently not supported
	[Oct17 20:05] overlayfs: idmapped layers are currently not supported
	[Oct17 20:06] overlayfs: idmapped layers are currently not supported
	[Oct17 20:07] overlayfs: idmapped layers are currently not supported
	[ +30.002292] overlayfs: idmapped layers are currently not supported
	[Oct17 20:08] overlayfs: idmapped layers are currently not supported
	[Oct17 20:09] overlayfs: idmapped layers are currently not supported
	[ +26.726183] overlayfs: idmapped layers are currently not supported
	[ +20.054803] overlayfs: idmapped layers are currently not supported
	[Oct17 20:10] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a6b3e974b27e414682e8adc3b208f72c9f313b4733f18ab5f560bd7e238be80a] <==
	{"level":"warn","ts":"2025-10-17T20:10:04.146118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.188804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.234056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.293232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.309537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.358718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.387664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.421701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.438913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.513353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.569009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.629876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.682972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.723123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.749816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.776766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.805389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.815961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.841540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.860567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.878996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.906011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.924085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.943035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:05.020749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35042","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:10:56 up  2:53,  0 user,  load average: 3.73, 4.40, 3.46
	Linux default-k8s-diff-port-740780 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ea08533626e7592fc61ba304cf97cd8eb64de0494753bde37a8b9d87caeca53f] <==
	I1017 20:10:07.748105       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:10:07.748643       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 20:10:07.748766       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:10:07.748779       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:10:07.748791       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:10:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:10:07.969297       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:10:07.969324       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:10:07.969334       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:10:07.969656       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 20:10:37.951453       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 20:10:37.970021       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 20:10:37.970028       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 20:10:37.970393       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1017 20:10:39.570235       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:10:39.570331       1 metrics.go:72] Registering metrics
	I1017 20:10:39.570407       1 controller.go:711] "Syncing nftables rules"
	I1017 20:10:47.955300       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:10:47.955366       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a6caff41823275ad2cd049c0053ce5ae7602d4c363bc83b1fe7629a564b7ac54] <==
	I1017 20:10:06.369798       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:10:06.369803       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:10:06.378338       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 20:10:06.378408       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:10:06.393824       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 20:10:06.393893       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:10:06.403082       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 20:10:06.403113       1 policy_source.go:240] refreshing policies
	I1017 20:10:06.403950       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 20:10:06.403974       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1017 20:10:06.408634       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:10:06.409629       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:10:06.418870       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1017 20:10:06.444141       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:10:06.681850       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:10:06.978935       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:10:07.689700       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:10:07.897330       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:10:07.990866       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:10:08.034084       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:10:08.418590       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.137.89"}
	I1017 20:10:08.506173       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.216.195"}
	I1017 20:10:10.721476       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:10:10.970602       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:10:11.021366       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9b30d2deb9ae5ab342e2a970b00848a001b112b0bfa707783b0702db3735167d] <==
	I1017 20:10:10.505232       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 20:10:10.505314       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-740780"
	I1017 20:10:10.505390       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 20:10:10.508276       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:10:10.512778       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 20:10:10.513144       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 20:10:10.513270       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 20:10:10.513331       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 20:10:10.514397       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 20:10:10.514418       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 20:10:10.514465       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1017 20:10:10.515894       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 20:10:10.518820       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 20:10:10.521966       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:10:10.526204       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 20:10:10.529460       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 20:10:10.529506       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 20:10:10.531756       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 20:10:10.536029       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 20:10:10.538318       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:10:10.539552       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 20:10:10.565063       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:10:10.565165       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:10:10.565195       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:10:10.583764       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [cb4d42676d8a4c718ff3906f4fcce605b5ee16ab93b39e0e2482f60b722be015] <==
	I1017 20:10:08.131420       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:10:08.406879       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:10:08.532587       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:10:08.532645       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1017 20:10:08.532740       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:10:08.593092       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:10:08.593158       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:10:08.601386       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:10:08.601855       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:10:08.602083       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:10:08.603402       1 config.go:200] "Starting service config controller"
	I1017 20:10:08.603521       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:10:08.603583       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:10:08.603618       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:10:08.603661       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:10:08.603688       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:10:08.607409       1 config.go:309] "Starting node config controller"
	I1017 20:10:08.607475       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:10:08.607507       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:10:08.704191       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:10:08.704215       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:10:08.704239       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7c7665546cb77975e68deac4ff243aa42b49d8525c2fc62e721424af6d1e6123] <==
	I1017 20:10:03.747579       1 serving.go:386] Generated self-signed cert in-memory
	I1017 20:10:08.363298       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:10:08.363412       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:10:08.397699       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:10:08.397874       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 20:10:08.397919       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 20:10:08.397993       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 20:10:08.399009       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:10:08.399076       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:10:08.399128       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:10:08.399158       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:10:08.498993       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1017 20:10:08.499187       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:10:08.499252       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:10:11 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:11.318459     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkzgp\" (UniqueName: \"kubernetes.io/projected/957b8ab9-0704-4c13-a3ab-a17691e5e2c1-kube-api-access-xkzgp\") pod \"kubernetes-dashboard-855c9754f9-rm6kw\" (UID: \"957b8ab9-0704-4c13-a3ab-a17691e5e2c1\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rm6kw"
	Oct 17 20:10:11 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:11.318527     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fe11556f-43a9-447c-922b-805c7a1b3067-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4ms6q\" (UID: \"fe11556f-43a9-447c-922b-805c7a1b3067\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ms6q"
	Oct 17 20:10:11 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:11.318554     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5rfx\" (UniqueName: \"kubernetes.io/projected/fe11556f-43a9-447c-922b-805c7a1b3067-kube-api-access-n5rfx\") pod \"dashboard-metrics-scraper-6ffb444bf9-4ms6q\" (UID: \"fe11556f-43a9-447c-922b-805c7a1b3067\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ms6q"
	Oct 17 20:10:11 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:11.318574     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/957b8ab9-0704-4c13-a3ab-a17691e5e2c1-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-rm6kw\" (UID: \"957b8ab9-0704-4c13-a3ab-a17691e5e2c1\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rm6kw"
	Oct 17 20:10:11 default-k8s-diff-port-740780 kubelet[775]: W1017 20:10:11.569744     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395/crio-96969515dc76a36e562c34c6a7ce4521efebeb60097876b91d7768dbae7ed0d0 WatchSource:0}: Error finding container 96969515dc76a36e562c34c6a7ce4521efebeb60097876b91d7768dbae7ed0d0: Status 404 returned error can't find the container with id 96969515dc76a36e562c34c6a7ce4521efebeb60097876b91d7768dbae7ed0d0
	Oct 17 20:10:11 default-k8s-diff-port-740780 kubelet[775]: W1017 20:10:11.591791     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395/crio-10f5a9fa8e695d1c2ae81e81ffa67c9e542b255567200e6a387c46d1ad526879 WatchSource:0}: Error finding container 10f5a9fa8e695d1c2ae81e81ffa67c9e542b255567200e6a387c46d1ad526879: Status 404 returned error can't find the container with id 10f5a9fa8e695d1c2ae81e81ffa67c9e542b255567200e6a387c46d1ad526879
	Oct 17 20:10:18 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:18.125316     775 scope.go:117] "RemoveContainer" containerID="3f1c1a63f12001cc6ec5075381d6e60eabedb84bf7b6f990f290bc1296c7e8cd"
	Oct 17 20:10:19 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:19.129050     775 scope.go:117] "RemoveContainer" containerID="3f1c1a63f12001cc6ec5075381d6e60eabedb84bf7b6f990f290bc1296c7e8cd"
	Oct 17 20:10:19 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:19.130030     775 scope.go:117] "RemoveContainer" containerID="cfcc4ac34cdab08ebe73bbd94e6de4343ad52fd37f9840a185fc6f1f13c06441"
	Oct 17 20:10:19 default-k8s-diff-port-740780 kubelet[775]: E1017 20:10:19.130198     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ms6q_kubernetes-dashboard(fe11556f-43a9-447c-922b-805c7a1b3067)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ms6q" podUID="fe11556f-43a9-447c-922b-805c7a1b3067"
	Oct 17 20:10:20 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:20.133071     775 scope.go:117] "RemoveContainer" containerID="cfcc4ac34cdab08ebe73bbd94e6de4343ad52fd37f9840a185fc6f1f13c06441"
	Oct 17 20:10:20 default-k8s-diff-port-740780 kubelet[775]: E1017 20:10:20.133271     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ms6q_kubernetes-dashboard(fe11556f-43a9-447c-922b-805c7a1b3067)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ms6q" podUID="fe11556f-43a9-447c-922b-805c7a1b3067"
	Oct 17 20:10:21 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:21.524288     775 scope.go:117] "RemoveContainer" containerID="cfcc4ac34cdab08ebe73bbd94e6de4343ad52fd37f9840a185fc6f1f13c06441"
	Oct 17 20:10:21 default-k8s-diff-port-740780 kubelet[775]: E1017 20:10:21.524480     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ms6q_kubernetes-dashboard(fe11556f-43a9-447c-922b-805c7a1b3067)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ms6q" podUID="fe11556f-43a9-447c-922b-805c7a1b3067"
	Oct 17 20:10:33 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:33.961874     775 scope.go:117] "RemoveContainer" containerID="cfcc4ac34cdab08ebe73bbd94e6de4343ad52fd37f9840a185fc6f1f13c06441"
	Oct 17 20:10:34 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:34.184929     775 scope.go:117] "RemoveContainer" containerID="cfcc4ac34cdab08ebe73bbd94e6de4343ad52fd37f9840a185fc6f1f13c06441"
	Oct 17 20:10:34 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:34.185635     775 scope.go:117] "RemoveContainer" containerID="6e976958932ed0a771f2d17bd5b5b8abf05e910444ce5500a110d35836ac6690"
	Oct 17 20:10:34 default-k8s-diff-port-740780 kubelet[775]: E1017 20:10:34.185823     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ms6q_kubernetes-dashboard(fe11556f-43a9-447c-922b-805c7a1b3067)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ms6q" podUID="fe11556f-43a9-447c-922b-805c7a1b3067"
	Oct 17 20:10:34 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:34.209960     775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rm6kw" podStartSLOduration=10.050183327 podStartE2EDuration="23.209942891s" podCreationTimestamp="2025-10-17 20:10:11 +0000 UTC" firstStartedPulling="2025-10-17 20:10:11.59760753 +0000 UTC m=+12.935662455" lastFinishedPulling="2025-10-17 20:10:24.757367086 +0000 UTC m=+26.095422019" observedRunningTime="2025-10-17 20:10:25.181623839 +0000 UTC m=+26.519678781" watchObservedRunningTime="2025-10-17 20:10:34.209942891 +0000 UTC m=+35.547997824"
	Oct 17 20:10:38 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:38.197976     775 scope.go:117] "RemoveContainer" containerID="355f42b2d9e5ab8e9cc0398be0c31946c5fd5ef67f1542040bd152dc86fc9eaa"
	Oct 17 20:10:41 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:41.524000     775 scope.go:117] "RemoveContainer" containerID="6e976958932ed0a771f2d17bd5b5b8abf05e910444ce5500a110d35836ac6690"
	Oct 17 20:10:41 default-k8s-diff-port-740780 kubelet[775]: E1017 20:10:41.524726     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ms6q_kubernetes-dashboard(fe11556f-43a9-447c-922b-805c7a1b3067)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ms6q" podUID="fe11556f-43a9-447c-922b-805c7a1b3067"
	Oct 17 20:10:53 default-k8s-diff-port-740780 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:10:53 default-k8s-diff-port-740780 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:10:53 default-k8s-diff-port-740780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e4770227203260bbb9d237b5374c9f19a250d85735e375ebb88ac0f7f39647f1] <==
	2025/10/17 20:10:24 Starting overwatch
	2025/10/17 20:10:24 Using namespace: kubernetes-dashboard
	2025/10/17 20:10:24 Using in-cluster config to connect to apiserver
	2025/10/17 20:10:24 Using secret token for csrf signing
	2025/10/17 20:10:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 20:10:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 20:10:24 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 20:10:24 Generating JWE encryption key
	2025/10/17 20:10:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 20:10:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 20:10:25 Initializing JWE encryption key from synchronized object
	2025/10/17 20:10:25 Creating in-cluster Sidecar client
	2025/10/17 20:10:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 20:10:25 Serving insecurely on HTTP port: 9090
	2025/10/17 20:10:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [355f42b2d9e5ab8e9cc0398be0c31946c5fd5ef67f1542040bd152dc86fc9eaa] <==
	I1017 20:10:07.782555       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 20:10:37.788859       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a159b70cb0ab7f408b26017316bda6e688ef0df499dfafaeb05cb122b5fb6b17] <==
	I1017 20:10:38.264781       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:10:38.293439       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:10:38.293503       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 20:10:38.296805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:41.751924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:46.012583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:49.611467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:52.666034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:55.689395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:55.698946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:10:55.699171       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:10:55.701855       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-740780_935034ff-dc55-4fd7-ad80-c74cd8208d67!
	I1017 20:10:55.702002       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81077895-6eb3-4ab5-abce-e2589ce9b483", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-740780_935034ff-dc55-4fd7-ad80-c74cd8208d67 became leader
	W1017 20:10:55.709508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:55.725949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:10:55.804912       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-740780_935034ff-dc55-4fd7-ad80-c74cd8208d67!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-740780 -n default-k8s-diff-port-740780
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-740780 -n default-k8s-diff-port-740780: exit status 2 (375.049311ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-740780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-740780
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-740780:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395",
	        "Created": "2025-10-17T20:08:03.310435059Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 482015,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:09:51.104940758Z",
	            "FinishedAt": "2025-10-17T20:09:50.10554158Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395/hostname",
	        "HostsPath": "/var/lib/docker/containers/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395/hosts",
	        "LogPath": "/var/lib/docker/containers/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395-json.log",
	        "Name": "/default-k8s-diff-port-740780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-740780:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-740780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395",
	                "LowerDir": "/var/lib/docker/overlay2/280fba353d4fefed83ab3bd7b3798c5b596f4b4c372a4f322e0f6bae68b71860-init/diff:/var/lib/docker/overlay2/85f84d5c43bddd27ba14f87c959fff21ca14a6525e571b05794f846c46e870c9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/280fba353d4fefed83ab3bd7b3798c5b596f4b4c372a4f322e0f6bae68b71860/merged",
	                "UpperDir": "/var/lib/docker/overlay2/280fba353d4fefed83ab3bd7b3798c5b596f4b4c372a4f322e0f6bae68b71860/diff",
	                "WorkDir": "/var/lib/docker/overlay2/280fba353d4fefed83ab3bd7b3798c5b596f4b4c372a4f322e0f6bae68b71860/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-740780",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-740780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-740780",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-740780",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-740780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c474177c3c955da8b4faf22f8c8b3b764d3744ea3ebbff477c861659d934c10c",
	            "SandboxKey": "/var/run/docker/netns/c474177c3c95",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-740780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:2e:83:93:38:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b07c93b74eadee92a26c052eb44e638916a69f6583542a7473d7302a377567bf",
	                    "EndpointID": "7aeaf2acfcbf765d3e66830fa317364530db7f447a35c87d2ed1f65ee01cd2bf",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-740780",
	                        "fedc9c1ddaae"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-740780 -n default-k8s-diff-port-740780
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-740780 -n default-k8s-diff-port-740780: exit status 2 (372.627493ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-740780 logs -n 25
E1017 20:10:59.199355  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-740780 logs -n 25: (1.269694548s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p no-preload-413711 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ delete  │ -p no-preload-413711                                                                                                                                                                                                                          │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ delete  │ -p no-preload-413711                                                                                                                                                                                                                          │ no-preload-413711            │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ delete  │ -p disable-driver-mounts-672422                                                                                                                                                                                                               │ disable-driver-mounts-672422 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ start   │ -p default-k8s-diff-port-740780 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:09 UTC │
	│ image   │ embed-certs-572724 image list --format=json                                                                                                                                                                                                   │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ pause   │ -p embed-certs-572724 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │                     │
	│ delete  │ -p embed-certs-572724                                                                                                                                                                                                                         │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ delete  │ -p embed-certs-572724                                                                                                                                                                                                                         │ embed-certs-572724           │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:08 UTC │
	│ start   │ -p newest-cni-718789 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:08 UTC │ 17 Oct 25 20:09 UTC │
	│ addons  │ enable metrics-server -p newest-cni-718789 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	│ stop    │ -p newest-cni-718789 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ addons  │ enable dashboard -p newest-cni-718789 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p newest-cni-718789 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-740780 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-740780 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ image   │ newest-cni-718789 image list --format=json                                                                                                                                                                                                    │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ pause   │ -p newest-cni-718789 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-740780 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p default-k8s-diff-port-740780 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:10 UTC │
	│ delete  │ -p newest-cni-718789                                                                                                                                                                                                                          │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ delete  │ -p newest-cni-718789                                                                                                                                                                                                                          │ newest-cni-718789            │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │ 17 Oct 25 20:09 UTC │
	│ start   │ -p auto-804622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-804622                  │ jenkins │ v1.37.0 │ 17 Oct 25 20:09 UTC │                     │
	│ image   │ default-k8s-diff-port-740780 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │ 17 Oct 25 20:10 UTC │
	│ pause   │ -p default-k8s-diff-port-740780 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-740780 │ jenkins │ v1.37.0 │ 17 Oct 25 20:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:09:56
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:09:56.838710  483598 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:09:56.839315  483598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:09:56.839336  483598 out.go:374] Setting ErrFile to fd 2...
	I1017 20:09:56.839359  483598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:09:56.839640  483598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:09:56.840068  483598 out.go:368] Setting JSON to false
	I1017 20:09:56.841062  483598 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10348,"bootTime":1760721449,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 20:09:56.841154  483598 start.go:141] virtualization:  
	I1017 20:09:56.845132  483598 out.go:179] * [auto-804622] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:09:56.849480  483598 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:09:56.849552  483598 notify.go:220] Checking for updates...
	I1017 20:09:56.855631  483598 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:09:56.858780  483598 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:09:56.862217  483598 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 20:09:56.865259  483598 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:09:56.868296  483598 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:09:56.871810  483598 config.go:182] Loaded profile config "default-k8s-diff-port-740780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:09:56.871984  483598 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:09:56.911004  483598 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:09:56.911126  483598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:09:56.997602  483598 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-17 20:09:56.981181733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:09:56.997701  483598 docker.go:318] overlay module found
	I1017 20:09:57.000918  483598 out.go:179] * Using the docker driver based on user configuration
	I1017 20:09:57.003825  483598 start.go:305] selected driver: docker
	I1017 20:09:57.003851  483598 start.go:925] validating driver "docker" against <nil>
	I1017 20:09:57.003884  483598 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:09:57.004709  483598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:09:57.091679  483598 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-17 20:09:57.081451048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:09:57.091914  483598 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 20:09:57.092157  483598 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:09:57.094982  483598 out.go:179] * Using Docker driver with root privileges
	I1017 20:09:57.097713  483598 cni.go:84] Creating CNI manager for ""
	I1017 20:09:57.097776  483598 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:09:57.097785  483598 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 20:09:57.097858  483598 start.go:349] cluster config:
	{Name:auto-804622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-804622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:09:57.101433  483598 out.go:179] * Starting "auto-804622" primary control-plane node in "auto-804622" cluster
	I1017 20:09:57.106413  483598 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:09:57.109445  483598 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:09:57.112278  483598 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:09:57.112328  483598 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:09:57.112337  483598 cache.go:58] Caching tarball of preloaded images
	I1017 20:09:57.112432  483598 preload.go:233] Found /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:09:57.112441  483598 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:09:57.112570  483598 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/config.json ...
	I1017 20:09:57.112598  483598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/config.json: {Name:mkc2890a001174a0f307b41e739f2161f812a8b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:09:57.112754  483598 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:09:57.148422  483598 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:09:57.148443  483598 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:09:57.148457  483598 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:09:57.148481  483598 start.go:360] acquireMachinesLock for auto-804622: {Name:mk1c90dcfd99f1024836dbf0db6cd464090d1b6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:09:57.148682  483598 start.go:364] duration metric: took 182.83µs to acquireMachinesLock for "auto-804622"
	I1017 20:09:57.148717  483598 start.go:93] Provisioning new machine with config: &{Name:auto-804622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-804622 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:09:57.148814  483598 start.go:125] createHost starting for "" (driver="docker")
	I1017 20:09:55.755764  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:09:55.773146  481830 provision.go:87] duration metric: took 747.84085ms to configureAuth
	I1017 20:09:55.773172  481830 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:09:55.773362  481830 config.go:182] Loaded profile config "default-k8s-diff-port-740780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:09:55.773479  481830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:09:55.790344  481830 main.go:141] libmachine: Using SSH client type: native
	I1017 20:09:55.790702  481830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1017 20:09:55.790727  481830 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:09:56.152206  481830 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:09:56.152228  481830 machine.go:96] duration metric: took 4.707206512s to provisionDockerMachine
	I1017 20:09:56.152238  481830 start.go:293] postStartSetup for "default-k8s-diff-port-740780" (driver="docker")
	I1017 20:09:56.152249  481830 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:09:56.152325  481830 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:09:56.152368  481830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:09:56.182425  481830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:09:56.292764  481830 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:09:56.296199  481830 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:09:56.296225  481830 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:09:56.296237  481830 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 20:09:56.296290  481830 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 20:09:56.296368  481830 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 20:09:56.296473  481830 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:09:56.307206  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:09:56.364406  481830 start.go:296] duration metric: took 212.152543ms for postStartSetup
	I1017 20:09:56.364484  481830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:09:56.364577  481830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:09:56.383676  481830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:09:56.496993  481830 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:09:56.502562  481830 fix.go:56] duration metric: took 5.463006588s for fixHost
	I1017 20:09:56.502590  481830 start.go:83] releasing machines lock for "default-k8s-diff-port-740780", held for 5.463057779s
	I1017 20:09:56.502696  481830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-740780
	I1017 20:09:56.523108  481830 ssh_runner.go:195] Run: cat /version.json
	I1017 20:09:56.523170  481830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:09:56.523453  481830 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:09:56.523513  481830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:09:56.545531  481830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:09:56.568612  481830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:09:56.668115  481830 ssh_runner.go:195] Run: systemctl --version
	I1017 20:09:56.770094  481830 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:09:56.825505  481830 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:09:56.830464  481830 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:09:56.830528  481830 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:09:56.839184  481830 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:09:56.839209  481830 start.go:495] detecting cgroup driver to use...
	I1017 20:09:56.839239  481830 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:09:56.839285  481830 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:09:56.857024  481830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:09:56.871260  481830 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:09:56.871320  481830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:09:56.889444  481830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:09:56.907405  481830 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:09:57.071160  481830 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:09:57.213075  481830 docker.go:234] disabling docker service ...
	I1017 20:09:57.213146  481830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:09:57.232346  481830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:09:57.250253  481830 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:09:57.404660  481830 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:09:57.539223  481830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:09:57.554800  481830 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:09:57.575049  481830 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:09:57.575217  481830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:57.587213  481830 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:09:57.587299  481830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:57.596661  481830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:57.605878  481830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:57.621062  481830 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:09:57.633130  481830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:57.643049  481830 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:57.658478  481830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:09:57.668074  481830 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:09:57.681112  481830 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:09:57.689788  481830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:09:57.838194  481830 ssh_runner.go:195] Run: sudo systemctl restart crio
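	The sed edits in the preceding lines rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted; a quick way to confirm the result is to grep the keys that were touched. This is only a sketch, assuming the kicbase image ships the stock 02-crio.conf layout referenced in the log:

	  # hypothetical verification step, run on the node after the edits above
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # expected (roughly):
	  #   pause_image = "registry.k8s.io/pause:3.10.1"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",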
	I1017 20:09:58.003167  481830 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:09:58.003273  481830 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:09:58.020814  481830 start.go:563] Will wait 60s for crictl version
	I1017 20:09:58.020878  481830 ssh_runner.go:195] Run: which crictl
	I1017 20:09:58.025670  481830 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:09:58.077946  481830 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:09:58.078046  481830 ssh_runner.go:195] Run: crio --version
	I1017 20:09:58.130993  481830 ssh_runner.go:195] Run: crio --version
	I1017 20:09:58.175256  481830 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:09:58.178109  481830 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-740780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:09:58.219053  481830 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 20:09:58.224446  481830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:09:58.239451  481830 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-740780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-740780 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:09:58.239571  481830 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:09:58.239624  481830 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:09:58.279809  481830 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:09:58.279835  481830 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:09:58.279890  481830 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:09:58.324949  481830 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:09:58.324974  481830 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:09:58.324981  481830 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1017 20:09:58.325071  481830 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-740780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-740780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:09:58.325146  481830 ssh_runner.go:195] Run: crio config
	I1017 20:09:58.429652  481830 cni.go:84] Creating CNI manager for ""
	I1017 20:09:58.429676  481830 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:09:58.429700  481830 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:09:58.429722  481830 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-740780 NodeName:default-k8s-diff-port-740780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:09:58.429858  481830 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-740780"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:09:58.429930  481830 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:09:58.437859  481830 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:09:58.437935  481830 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:09:58.445440  481830 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1017 20:09:58.458452  481830 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:09:58.470996  481830 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
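	The kubeadm config rendered a few lines earlier has just been written to /var/tmp/minikube/kubeadm.yaml.new. If you want to sanity-check such a file by hand, a kubeadm dry run parses and validates it without applying anything to the node; a sketch, assuming a kubeadm binary sits in the same versioned binaries directory as the kubectl binary used elsewhere in this log:

	  # hedged example: parse and dry-run the generated config without touching node state
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run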
	I1017 20:09:58.484149  481830 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:09:58.488053  481830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:09:58.497261  481830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:09:58.641285  481830 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:09:58.657024  481830 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780 for IP: 192.168.76.2
	I1017 20:09:58.657045  481830 certs.go:195] generating shared ca certs ...
	I1017 20:09:58.657061  481830 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:09:58.657199  481830 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 20:09:58.657248  481830 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 20:09:58.657259  481830 certs.go:257] generating profile certs ...
	I1017 20:09:58.657353  481830 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.key
	I1017 20:09:58.657420  481830 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.key.79d0c2c9
	I1017 20:09:58.657470  481830 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.key
	I1017 20:09:58.657574  481830 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 20:09:58.657612  481830 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 20:09:58.657628  481830 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:09:58.657657  481830 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:09:58.657682  481830 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:09:58.657712  481830 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 20:09:58.657755  481830 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:09:58.658321  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:09:58.721621  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:09:58.762422  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:09:58.805695  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 20:09:58.856503  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 20:09:58.910502  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:09:58.950292  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:09:58.978158  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:09:58.997125  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:09:59.017323  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 20:09:59.050013  481830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 20:09:59.075595  481830 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:09:59.088136  481830 ssh_runner.go:195] Run: openssl version
	I1017 20:09:59.094986  481830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 20:09:59.103318  481830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 20:09:59.107197  481830 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 20:09:59.107306  481830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 20:09:59.158844  481830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 20:09:59.168103  481830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 20:09:59.177338  481830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 20:09:59.182329  481830 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 20:09:59.182458  481830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 20:09:59.228784  481830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:09:59.239706  481830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:09:59.251840  481830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:09:59.256974  481830 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:09:59.257102  481830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:09:59.304863  481830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
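	The 8-hex-digit symlink names used in these steps (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes, which is what the preceding "openssl x509 -hash -noout" calls compute; OpenSSL locates CA certificates in /etc/ssl/certs by that hash. A minimal illustration using the minikubeCA file from the log:

	  # prints the subject hash, e.g. b5213941, which becomes the <hash>.0 symlink name
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0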
	I1017 20:09:59.314591  481830 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:09:59.319648  481830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:09:59.370217  481830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:09:59.488378  481830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:09:59.572408  481830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:09:59.679625  481830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:09:59.813954  481830 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
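	The batch of "-checkend 86400" checks above asks OpenSSL whether each certificate will still be valid 86400 seconds (24 hours) from now: the command exits 0 if the cert survives that window and 1 if it will have expired, which is presumably what drives the regenerate-or-reuse decision here. Standalone example:

	  # exit status 0: still valid in 24h; exit status 1: expiring within 24h
	  openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	    && echo "cert good for at least another day" || echo "cert expires within 24h"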
	I1017 20:09:59.942856  481830 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-740780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-740780 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:09:59.943015  481830 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:09:59.943107  481830 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:10:00.194336  481830 cri.go:89] found id: "a6b3e974b27e414682e8adc3b208f72c9f313b4733f18ab5f560bd7e238be80a"
	I1017 20:10:00.194425  481830 cri.go:89] found id: "7c7665546cb77975e68deac4ff243aa42b49d8525c2fc62e721424af6d1e6123"
	I1017 20:10:00.194478  481830 cri.go:89] found id: "a6caff41823275ad2cd049c0053ce5ae7602d4c363bc83b1fe7629a564b7ac54"
	I1017 20:10:00.194497  481830 cri.go:89] found id: "9b30d2deb9ae5ab342e2a970b00848a001b112b0bfa707783b0702db3735167d"
	I1017 20:10:00.194538  481830 cri.go:89] found id: ""
	I1017 20:10:00.194675  481830 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 20:10:00.312956  481830 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:10:00Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:10:00.313180  481830 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:10:00.397419  481830 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:10:00.397502  481830 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:10:00.397605  481830 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:10:00.454981  481830 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:10:00.455455  481830 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-740780" does not appear in /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:10:00.455614  481830 kubeconfig.go:62] /home/jenkins/minikube-integration/21753-257739/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-740780" cluster setting kubeconfig missing "default-k8s-diff-port-740780" context setting]
	I1017 20:10:00.455972  481830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:00.457838  481830 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:10:00.484391  481830 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1017 20:10:00.484495  481830 kubeadm.go:601] duration metric: took 86.961245ms to restartPrimaryControlPlane
	I1017 20:10:00.484544  481830 kubeadm.go:402] duration metric: took 541.711264ms to StartCluster
	I1017 20:10:00.484586  481830 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:00.484708  481830 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:10:00.485457  481830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:00.486043  481830 config.go:182] Loaded profile config "default-k8s-diff-port-740780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:10:00.486162  481830 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:10:00.486256  481830 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-740780"
	I1017 20:10:00.486273  481830 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-740780"
	W1017 20:10:00.486287  481830 addons.go:247] addon storage-provisioner should already be in state true
	I1017 20:10:00.486320  481830 host.go:66] Checking if "default-k8s-diff-port-740780" exists ...
	I1017 20:10:00.487076  481830 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:10:00.487267  481830 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:10:00.487690  481830 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-740780"
	I1017 20:10:00.487711  481830 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-740780"
	W1017 20:10:00.487725  481830 addons.go:247] addon dashboard should already be in state true
	I1017 20:10:00.487761  481830 host.go:66] Checking if "default-k8s-diff-port-740780" exists ...
	I1017 20:10:00.488231  481830 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:10:00.488802  481830 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-740780"
	I1017 20:10:00.488834  481830 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-740780"
	I1017 20:10:00.489152  481830 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:10:00.500064  481830 out.go:179] * Verifying Kubernetes components...
	I1017 20:10:00.540941  481830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:10:00.562660  481830 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 20:10:00.562731  481830 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:10:00.563748  481830 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-740780"
	W1017 20:10:00.563767  481830 addons.go:247] addon default-storageclass should already be in state true
	I1017 20:10:00.563794  481830 host.go:66] Checking if "default-k8s-diff-port-740780" exists ...
	I1017 20:10:00.564229  481830 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-740780 --format={{.State.Status}}
	I1017 20:10:00.572643  481830 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:10:00.572674  481830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:10:00.572755  481830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:10:00.576119  481830 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1017 20:10:00.579338  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 20:10:00.579365  481830 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 20:10:00.579455  481830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:10:00.607704  481830 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:10:00.607728  481830 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:10:00.607798  481830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-740780
	I1017 20:10:00.644272  481830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:10:00.656830  481830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:10:00.668803  481830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/default-k8s-diff-port-740780/id_rsa Username:docker}
	I1017 20:09:57.152295  483598 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 20:09:57.152667  483598 start.go:159] libmachine.API.Create for "auto-804622" (driver="docker")
	I1017 20:09:57.152725  483598 client.go:168] LocalClient.Create starting
	I1017 20:09:57.152826  483598 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem
	I1017 20:09:57.152868  483598 main.go:141] libmachine: Decoding PEM data...
	I1017 20:09:57.152887  483598 main.go:141] libmachine: Parsing certificate...
	I1017 20:09:57.152958  483598 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem
	I1017 20:09:57.152985  483598 main.go:141] libmachine: Decoding PEM data...
	I1017 20:09:57.152998  483598 main.go:141] libmachine: Parsing certificate...
	I1017 20:09:57.153387  483598 cli_runner.go:164] Run: docker network inspect auto-804622 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 20:09:57.175377  483598 cli_runner.go:211] docker network inspect auto-804622 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 20:09:57.175452  483598 network_create.go:284] running [docker network inspect auto-804622] to gather additional debugging logs...
	I1017 20:09:57.175470  483598 cli_runner.go:164] Run: docker network inspect auto-804622
	W1017 20:09:57.203828  483598 cli_runner.go:211] docker network inspect auto-804622 returned with exit code 1
	I1017 20:09:57.203856  483598 network_create.go:287] error running [docker network inspect auto-804622]: docker network inspect auto-804622: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-804622 not found
	I1017 20:09:57.203870  483598 network_create.go:289] output of [docker network inspect auto-804622]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-804622 not found
	
	** /stderr **
	I1017 20:09:57.203972  483598 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:09:57.233942  483598 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9f667d9c3ea2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:fc:1d:c6:d2:da} reservation:<nil>}
	I1017 20:09:57.234211  483598 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-82a22734829b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:22:5a:78:c5:e0:0a} reservation:<nil>}
	I1017 20:09:57.234560  483598 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0b88bd3b523f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:75:74:cd:15:9b} reservation:<nil>}
	I1017 20:09:57.234848  483598 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b07c93b74ead IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ae:cc:0a:13:a9:64} reservation:<nil>}
	I1017 20:09:57.235258  483598 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cd820}
	I1017 20:09:57.235276  483598 network_create.go:124] attempt to create docker network auto-804622 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1017 20:09:57.235328  483598 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-804622 auto-804622
	I1017 20:09:57.312098  483598 network_create.go:108] docker network auto-804622 192.168.85.0/24 created
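	The subnet-skipping lines above walk the existing docker bridge networks (192.168.49/58/67/76.0/24 were already taken) before settling on 192.168.85.0/24 for auto-804622. To reproduce that view by hand, something like the following sketch lists each bridge network with the subnet it occupies (assumes the docker CLI on the same host):

	  # list bridge networks and the subnets they occupy
	  docker network ls --filter driver=bridge -q \
	    | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'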
	I1017 20:09:57.312134  483598 kic.go:121] calculated static IP "192.168.85.2" for the "auto-804622" container
	I1017 20:09:57.312202  483598 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 20:09:57.336932  483598 cli_runner.go:164] Run: docker volume create auto-804622 --label name.minikube.sigs.k8s.io=auto-804622 --label created_by.minikube.sigs.k8s.io=true
	I1017 20:09:57.353217  483598 oci.go:103] Successfully created a docker volume auto-804622
	I1017 20:09:57.353289  483598 cli_runner.go:164] Run: docker run --rm --name auto-804622-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-804622 --entrypoint /usr/bin/test -v auto-804622:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 20:09:58.027826  483598 oci.go:107] Successfully prepared a docker volume auto-804622
	I1017 20:09:58.027863  483598 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:09:58.027882  483598 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 20:09:58.027949  483598 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-804622:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1017 20:10:01.011593  481830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:10:01.100730  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 20:10:01.100849  481830 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 20:10:01.224576  481830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:10:01.230249  481830 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:10:01.268379  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 20:10:01.268402  481830 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 20:10:01.358546  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 20:10:01.358569  481830 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 20:10:01.508947  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 20:10:01.508969  481830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 20:10:01.590724  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 20:10:01.590803  481830 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 20:10:01.627613  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 20:10:01.627691  481830 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 20:10:01.742721  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 20:10:01.742813  481830 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 20:10:01.784602  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 20:10:01.784678  481830 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 20:10:01.821183  481830 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 20:10:01.821260  481830 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 20:10:01.869194  481830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 20:10:03.447792  483598 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-804622:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.419807248s)
	I1017 20:10:03.447837  483598 kic.go:203] duration metric: took 5.419939249s to extract preloaded images to volume ...
	W1017 20:10:03.447965  483598 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1017 20:10:03.448072  483598 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 20:10:03.556000  483598 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-804622 --name auto-804622 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-804622 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-804622 --network auto-804622 --ip 192.168.85.2 --volume auto-804622:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 20:10:03.932411  483598 cli_runner.go:164] Run: docker container inspect auto-804622 --format={{.State.Running}}
	I1017 20:10:03.954511  483598 cli_runner.go:164] Run: docker container inspect auto-804622 --format={{.State.Status}}
	I1017 20:10:03.984976  483598 cli_runner.go:164] Run: docker exec auto-804622 stat /var/lib/dpkg/alternatives/iptables
	I1017 20:10:04.060117  483598 oci.go:144] the created container "auto-804622" has a running status.
	I1017 20:10:04.060153  483598 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/auto-804622/id_rsa...
	I1017 20:10:04.244246  483598 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21753-257739/.minikube/machines/auto-804622/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 20:10:04.269370  483598 cli_runner.go:164] Run: docker container inspect auto-804622 --format={{.State.Status}}
	I1017 20:10:04.299623  483598 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 20:10:04.299647  483598 kic_runner.go:114] Args: [docker exec --privileged auto-804622 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 20:10:04.369832  483598 cli_runner.go:164] Run: docker container inspect auto-804622 --format={{.State.Status}}
	I1017 20:10:04.397317  483598 machine.go:93] provisionDockerMachine start ...
	I1017 20:10:04.397434  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:04.421394  483598 main.go:141] libmachine: Using SSH client type: native
	I1017 20:10:04.421730  483598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1017 20:10:04.421740  483598 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:10:04.422493  483598 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33878->127.0.0.1:33460: read: connection reset by peer
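[Editorial aside, not part of the captured log] The "connection reset by peer" above is expected: the auto-804622 container was created moments earlier and its sshd is not yet accepting connections, so libmachine simply retries the dial (the same "hostname" command succeeds at 20:10:07 further down). A minimal retry sketch in Go using golang.org/x/crypto/ssh, with the port and key path taken from this log; this is an illustration only, not minikube's ssh_runner/libmachine implementation.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// runWithRetry dials the forwarded SSH port until sshd answers, then runs cmd.
func runWithRetry(addr, keyPath, cmd string, attempts int) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test container only
		Timeout:         5 * time.Second,
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			lastErr = err // e.g. "connection reset by peer" while sshd starts
			time.Sleep(time.Second)
			continue
		}
		sess, err := client.NewSession()
		if err != nil {
			client.Close()
			return "", err
		}
		out, err := sess.CombinedOutput(cmd)
		sess.Close()
		client.Close()
		return string(out), err
	}
	return "", lastErr
}

func main() {
	out, err := runWithRetry("127.0.0.1:33460",
		"/home/jenkins/minikube-integration/21753-257739/.minikube/machines/auto-804622/id_rsa",
		"hostname", 10)
	fmt.Println(out, err)
}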
	I1017 20:10:08.368503  481830 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.356821294s)
	I1017 20:10:08.368571  481830 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.13825533s)
	I1017 20:10:08.368602  481830 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-740780" to be "Ready" ...
	I1017 20:10:08.368933  481830 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.144284155s)
	I1017 20:10:08.442035  481830 node_ready.go:49] node "default-k8s-diff-port-740780" is "Ready"
	I1017 20:10:08.442113  481830 node_ready.go:38] duration metric: took 73.498143ms for node "default-k8s-diff-port-740780" to be "Ready" ...
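[Editorial aside, not part of the captured log] The node_ready wait above amounts to polling the node's Ready condition through the API server. A sketch with client-go, assuming the kubeconfig path and node name seen in this log; this is not minikube's node_ready.go.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := nodeReady("/var/lib/minikube/kubeconfig", "default-k8s-diff-port-740780")
	fmt.Println(ready, err)
}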
	I1017 20:10:08.442142  481830 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:10:08.442238  481830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:10:08.513321  481830 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.643994627s)
	I1017 20:10:08.513585  481830 api_server.go:72] duration metric: took 8.026258589s to wait for apiserver process to appear ...
	I1017 20:10:08.513644  481830 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:10:08.513677  481830 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1017 20:10:08.516440  481830 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-740780 addons enable metrics-server
	
	I1017 20:10:08.519350  481830 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1017 20:10:08.522255  481830 addons.go:514] duration metric: took 8.036066093s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1017 20:10:08.526204  481830 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1017 20:10:08.528006  481830 api_server.go:141] control plane version: v1.34.1
	I1017 20:10:08.528027  481830 api_server.go:131] duration metric: took 14.363459ms to wait for apiserver health ...
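[Editorial aside, not part of the captured log] The healthz wait above is an HTTPS GET against https://192.168.76.2:8444/healthz repeated until it returns 200 "ok". A minimal poll in Go; certificate verification is skipped here only because this sketch does not load the cluster CA, and it is not minikube's api_server.go.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.76.2:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}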
	I1017 20:10:08.528035  481830 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:10:08.532822  481830 system_pods.go:59] 8 kube-system pods found
	I1017 20:10:08.532903  481830 system_pods.go:61] "coredns-66bc5c9577-6mknt" [15647d52-61fb-4af6-8d28-66da6ebd0923] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:10:08.532929  481830 system_pods.go:61] "etcd-default-k8s-diff-port-740780" [6a636316-c994-44d8-b608-0c1cfa06bd55] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:10:08.532951  481830 system_pods.go:61] "kindnet-fnx26" [16e1d707-7d88-4317-ab9f-dd7698ee1cd1] Running
	I1017 20:10:08.532985  481830 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-740780" [7e36f4e9-953c-457d-b6bf-b26ac987ab87] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:10:08.533009  481830 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-740780" [9e5bfd14-bb31-4668-a9db-6278ca49ae54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:10:08.533031  481830 system_pods.go:61] "kube-proxy-8x772" [19f55ff7-64eb-4407-9168-aa18ddbe543c] Running
	I1017 20:10:08.533062  481830 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-740780" [44223246-1f61-4365-98a5-c3820458e28a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:10:08.533081  481830 system_pods.go:61] "storage-provisioner" [f0266236-3025-407f-ae0f-c4e9e5ae8ff0] Running
	I1017 20:10:08.533104  481830 system_pods.go:74] duration metric: took 5.063034ms to wait for pod list to return data ...
	I1017 20:10:08.533134  481830 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:10:08.536010  481830 default_sa.go:45] found service account: "default"
	I1017 20:10:08.536079  481830 default_sa.go:55] duration metric: took 2.92651ms for default service account to be created ...
	I1017 20:10:08.536103  481830 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:10:08.539843  481830 system_pods.go:86] 8 kube-system pods found
	I1017 20:10:08.539937  481830 system_pods.go:89] "coredns-66bc5c9577-6mknt" [15647d52-61fb-4af6-8d28-66da6ebd0923] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:10:08.539971  481830 system_pods.go:89] "etcd-default-k8s-diff-port-740780" [6a636316-c994-44d8-b608-0c1cfa06bd55] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:10:08.540006  481830 system_pods.go:89] "kindnet-fnx26" [16e1d707-7d88-4317-ab9f-dd7698ee1cd1] Running
	I1017 20:10:08.540029  481830 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-740780" [7e36f4e9-953c-457d-b6bf-b26ac987ab87] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:10:08.540062  481830 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-740780" [9e5bfd14-bb31-4668-a9db-6278ca49ae54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:10:08.540092  481830 system_pods.go:89] "kube-proxy-8x772" [19f55ff7-64eb-4407-9168-aa18ddbe543c] Running
	I1017 20:10:08.540116  481830 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-740780" [44223246-1f61-4365-98a5-c3820458e28a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:10:08.540136  481830 system_pods.go:89] "storage-provisioner" [f0266236-3025-407f-ae0f-c4e9e5ae8ff0] Running
	I1017 20:10:08.540169  481830 system_pods.go:126] duration metric: took 4.046881ms to wait for k8s-apps to be running ...
	I1017 20:10:08.540190  481830 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:10:08.540267  481830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:10:08.560287  481830 system_svc.go:56] duration metric: took 20.087185ms WaitForService to wait for kubelet
	I1017 20:10:08.560362  481830 kubeadm.go:586] duration metric: took 8.073035536s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:10:08.560399  481830 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:10:08.563785  481830 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:10:08.563848  481830 node_conditions.go:123] node cpu capacity is 2
	I1017 20:10:08.563885  481830 node_conditions.go:105] duration metric: took 3.462839ms to run NodePressure ...
	I1017 20:10:08.563914  481830 start.go:241] waiting for startup goroutines ...
	I1017 20:10:08.563944  481830 start.go:246] waiting for cluster config update ...
	I1017 20:10:08.563969  481830 start.go:255] writing updated cluster config ...
	I1017 20:10:08.564303  481830 ssh_runner.go:195] Run: rm -f paused
	I1017 20:10:08.576907  481830 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:10:08.584960  481830 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6mknt" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 20:10:10.602945  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	I1017 20:10:07.616382  483598 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-804622
	
	I1017 20:10:07.616456  483598 ubuntu.go:182] provisioning hostname "auto-804622"
	I1017 20:10:07.616556  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:07.639674  483598 main.go:141] libmachine: Using SSH client type: native
	I1017 20:10:07.639984  483598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1017 20:10:07.639995  483598 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-804622 && echo "auto-804622" | sudo tee /etc/hostname
	I1017 20:10:07.848287  483598 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-804622
	
	I1017 20:10:07.848417  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:07.876391  483598 main.go:141] libmachine: Using SSH client type: native
	I1017 20:10:07.876769  483598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1017 20:10:07.876790  483598 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-804622' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-804622/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-804622' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:10:08.053186  483598 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:10:08.053215  483598 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21753-257739/.minikube CaCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21753-257739/.minikube}
	I1017 20:10:08.053272  483598 ubuntu.go:190] setting up certificates
	I1017 20:10:08.053283  483598 provision.go:84] configureAuth start
	I1017 20:10:08.053369  483598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-804622
	I1017 20:10:08.093086  483598 provision.go:143] copyHostCerts
	I1017 20:10:08.093162  483598 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem, removing ...
	I1017 20:10:08.093178  483598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem
	I1017 20:10:08.093252  483598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/ca.pem (1082 bytes)
	I1017 20:10:08.093352  483598 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem, removing ...
	I1017 20:10:08.093363  483598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem
	I1017 20:10:08.093393  483598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/cert.pem (1123 bytes)
	I1017 20:10:08.093464  483598 exec_runner.go:144] found /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem, removing ...
	I1017 20:10:08.093475  483598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem
	I1017 20:10:08.093501  483598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21753-257739/.minikube/key.pem (1675 bytes)
	I1017 20:10:08.093562  483598 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem org=jenkins.auto-804622 san=[127.0.0.1 192.168.85.2 auto-804622 localhost minikube]
	I1017 20:10:08.571761  483598 provision.go:177] copyRemoteCerts
	I1017 20:10:08.571834  483598 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:10:08.571883  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:08.599920  483598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/auto-804622/id_rsa Username:docker}
	I1017 20:10:08.705603  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:10:08.723989  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1017 20:10:08.745559  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 20:10:08.766772  483598 provision.go:87] duration metric: took 713.460394ms to configureAuth
	I1017 20:10:08.766800  483598 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:10:08.766985  483598 config.go:182] Loaded profile config "auto-804622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:10:08.767101  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:08.787352  483598 main.go:141] libmachine: Using SSH client type: native
	I1017 20:10:08.787739  483598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33460 <nil> <nil>}
	I1017 20:10:08.787757  483598 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:10:09.054326  483598 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:10:09.054408  483598 machine.go:96] duration metric: took 4.657060083s to provisionDockerMachine
	I1017 20:10:09.054441  483598 client.go:171] duration metric: took 11.901704674s to LocalClient.Create
	I1017 20:10:09.054495  483598 start.go:167] duration metric: took 11.901830136s to libmachine.API.Create "auto-804622"
	I1017 20:10:09.054523  483598 start.go:293] postStartSetup for "auto-804622" (driver="docker")
	I1017 20:10:09.054551  483598 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:10:09.054654  483598 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:10:09.054731  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:09.075470  483598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/auto-804622/id_rsa Username:docker}
	I1017 20:10:09.185528  483598 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:10:09.190482  483598 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:10:09.190513  483598 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:10:09.190524  483598 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/addons for local assets ...
	I1017 20:10:09.190581  483598 filesync.go:126] Scanning /home/jenkins/minikube-integration/21753-257739/.minikube/files for local assets ...
	I1017 20:10:09.190663  483598 filesync.go:149] local asset: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem -> 2595962.pem in /etc/ssl/certs
	I1017 20:10:09.190765  483598 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:10:09.200651  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:10:09.222990  483598 start.go:296] duration metric: took 168.434432ms for postStartSetup
	I1017 20:10:09.223365  483598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-804622
	I1017 20:10:09.246198  483598 profile.go:143] Saving config to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/config.json ...
	I1017 20:10:09.246489  483598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:10:09.246544  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:09.278376  483598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/auto-804622/id_rsa Username:docker}
	I1017 20:10:09.390494  483598 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:10:09.395463  483598 start.go:128] duration metric: took 12.246633406s to createHost
	I1017 20:10:09.395490  483598 start.go:83] releasing machines lock for "auto-804622", held for 12.246794559s
	I1017 20:10:09.395570  483598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-804622
	I1017 20:10:09.415545  483598 ssh_runner.go:195] Run: cat /version.json
	I1017 20:10:09.415610  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:09.415888  483598 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:10:09.415949  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:09.436656  483598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/auto-804622/id_rsa Username:docker}
	I1017 20:10:09.444809  483598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/auto-804622/id_rsa Username:docker}
	I1017 20:10:09.540452  483598 ssh_runner.go:195] Run: systemctl --version
	I1017 20:10:09.633451  483598 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:10:09.674603  483598 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:10:09.679374  483598 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:10:09.679468  483598 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:10:09.711973  483598 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1017 20:10:09.711997  483598 start.go:495] detecting cgroup driver to use...
	I1017 20:10:09.712032  483598 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:10:09.712087  483598 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:10:09.731298  483598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:10:09.743984  483598 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:10:09.744099  483598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:10:09.768843  483598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:10:09.797081  483598 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:10:09.934284  483598 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:10:10.075750  483598 docker.go:234] disabling docker service ...
	I1017 20:10:10.075880  483598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:10:10.113984  483598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:10:10.129734  483598 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:10:10.275530  483598 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:10:10.405168  483598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:10:10.426617  483598 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:10:10.447190  483598 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:10:10.447294  483598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:10:10.457453  483598 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:10:10.457591  483598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:10:10.477563  483598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:10:10.494244  483598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:10:10.505740  483598 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:10:10.518959  483598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:10:10.528770  483598 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:10:10.543216  483598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:10:10.552807  483598 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:10:10.560345  483598 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:10:10.568064  483598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:10:10.700878  483598 ssh_runner.go:195] Run: sudo systemctl restart crio
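[Editorial aside, not part of the captured log] The sed invocations above just rewrite two keys in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted: the pause image and the cgroup manager (matching the "cgroupfs" driver detected on the host). The same rewrite sketched locally in Go with regexp, values taken from this log; minikube performs it remotely over SSH, so this is illustrative only.

package main

import (
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// pause_image = "registry.k8s.io/pause:3.10.1"
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// cgroup_manager = "cgroupfs"
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0644); err != nil {
		panic(err)
	}
}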
	I1017 20:10:10.836357  483598 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:10:10.836427  483598 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:10:10.841097  483598 start.go:563] Will wait 60s for crictl version
	I1017 20:10:10.841206  483598 ssh_runner.go:195] Run: which crictl
	I1017 20:10:10.844752  483598 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:10:10.872059  483598 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:10:10.872215  483598 ssh_runner.go:195] Run: crio --version
	I1017 20:10:10.902009  483598 ssh_runner.go:195] Run: crio --version
	I1017 20:10:10.937547  483598 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:10:10.940454  483598 cli_runner.go:164] Run: docker network inspect auto-804622 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:10:10.956318  483598 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 20:10:10.961556  483598 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
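[Editorial aside, not part of the captured log] The bash one-liner above updates /etc/hosts idempotently: drop any existing host.minikube.internal entry, then append the gateway IP for it. An equivalent local sketch in Go (minikube runs the bash version remotely via ssh_runner, so this is illustrative only):

package main

import (
	"os"
	"strings"
)

// addHostEntry removes any stale line for host and appends "ip<TAB>host".
func addHostEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = addHostEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal")
}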
	I1017 20:10:10.980682  483598 kubeadm.go:883] updating cluster {Name:auto-804622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-804622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:10:10.980800  483598 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:10:10.980860  483598 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:10:11.023983  483598 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:10:11.024011  483598 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:10:11.024068  483598 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:10:11.052266  483598 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:10:11.052294  483598 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:10:11.052304  483598 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1017 20:10:11.052453  483598 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-804622 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-804622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:10:11.052570  483598 ssh_runner.go:195] Run: crio config
	I1017 20:10:11.128399  483598 cni.go:84] Creating CNI manager for ""
	I1017 20:10:11.128426  483598 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:10:11.128444  483598 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:10:11.128490  483598 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-804622 NodeName:auto-804622 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:10:11.128724  483598 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-804622"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:10:11.128808  483598 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:10:11.141489  483598 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:10:11.141568  483598 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:10:11.150433  483598 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1017 20:10:11.164794  483598 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:10:11.180991  483598 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
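[Editorial aside, not part of the captured log] The kubeadm.yaml.new just written (2208 bytes) is rendered from the kubeadm options logged at kubeadm.go:190 above. A hypothetical, minimal text/template rendering of only the InitConfiguration fragment, with values from this log; minikube's real templates in its bootstrapper package are more complete, so treat this purely as a sketch of the idea.

package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	t.Execute(os.Stdout, map[string]interface{}{
		"NodeIP":        "192.168.85.2",
		"APIServerPort": 8443,
		"CRISocket":     "/var/run/crio/crio.sock",
		"NodeName":      "auto-804622",
	})
}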
	I1017 20:10:11.210609  483598 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:10:11.214568  483598 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:10:11.231554  483598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:10:11.366734  483598 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:10:11.384339  483598 certs.go:69] Setting up /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622 for IP: 192.168.85.2
	I1017 20:10:11.384409  483598 certs.go:195] generating shared ca certs ...
	I1017 20:10:11.384456  483598 certs.go:227] acquiring lock for ca certs: {Name:mk60c0cb3b8ac6045aafd2d7b4c6ccf245cce3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:11.384673  483598 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key
	I1017 20:10:11.384761  483598 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key
	I1017 20:10:11.384786  483598 certs.go:257] generating profile certs ...
	I1017 20:10:11.384860  483598 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/client.key
	I1017 20:10:11.384899  483598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/client.crt with IP's: []
	I1017 20:10:11.634971  483598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/client.crt ...
	I1017 20:10:11.635050  483598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/client.crt: {Name:mk17d77eb2a35743ef5ae244f9ae9da67a7eeb56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:11.635286  483598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/client.key ...
	I1017 20:10:11.635323  483598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/client.key: {Name:mk01f927dbb1ecf78c0d4b86082e14a79ab64245 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:11.635464  483598 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.key.77a2ba55
	I1017 20:10:11.635507  483598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.crt.77a2ba55 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1017 20:10:12.587127  483598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.crt.77a2ba55 ...
	I1017 20:10:12.587200  483598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.crt.77a2ba55: {Name:mk0c077d35bd5a3ed6e2edf2bd8d9c1937b551f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:12.587394  483598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.key.77a2ba55 ...
	I1017 20:10:12.587435  483598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.key.77a2ba55: {Name:mkb73a1db540eb0cb0001ef06da90f6bb834a09a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:12.587546  483598 certs.go:382] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.crt.77a2ba55 -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.crt
	I1017 20:10:12.587663  483598 certs.go:386] copying /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.key.77a2ba55 -> /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.key
	I1017 20:10:12.587769  483598 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/proxy-client.key
	I1017 20:10:12.587832  483598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/proxy-client.crt with IP's: []
	I1017 20:10:12.806386  483598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/proxy-client.crt ...
	I1017 20:10:12.806458  483598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/proxy-client.crt: {Name:mk087fbb1670990a7ad9f61450044d9c39ce1004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:12.806677  483598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/proxy-client.key ...
	I1017 20:10:12.806713  483598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/proxy-client.key: {Name:mk128f7fb01dfc3b3add3970a0996453a29ad62b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
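[Editorial aside, not part of the captured log] The profile certs generated above (client, apiserver with IP SANs, proxy-client) are ordinary CA-signed x509 certificates. A self-contained Go crypto/x509 sketch that issues a serving certificate with the same IP SANs under a throwaway in-memory CA; minikube instead loads ca.crt/ca.key from the .minikube directory, and this is not its crypto.go (errors are elided for brevity).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs seen in the apiserver cert above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}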
	I1017 20:10:12.806944  483598 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem (1338 bytes)
	W1017 20:10:12.807010  483598 certs.go:480] ignoring /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596_empty.pem, impossibly tiny 0 bytes
	I1017 20:10:12.807042  483598 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:10:12.807089  483598 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:10:12.807146  483598 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:10:12.807192  483598 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/certs/key.pem (1675 bytes)
	I1017 20:10:12.807270  483598 certs.go:484] found cert: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem (1708 bytes)
	I1017 20:10:12.807932  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:10:12.841392  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1017 20:10:12.864067  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:10:12.888939  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1017 20:10:12.914271  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1017 20:10:12.938373  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:10:12.963437  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:10:12.987152  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:10:13.012641  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/certs/259596.pem --> /usr/share/ca-certificates/259596.pem (1338 bytes)
	I1017 20:10:13.037465  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/ssl/certs/2595962.pem --> /usr/share/ca-certificates/2595962.pem (1708 bytes)
	I1017 20:10:13.067944  483598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21753-257739/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:10:13.102859  483598 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:10:13.120868  483598 ssh_runner.go:195] Run: openssl version
	I1017 20:10:13.130063  483598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259596.pem && ln -fs /usr/share/ca-certificates/259596.pem /etc/ssl/certs/259596.pem"
	I1017 20:10:13.141067  483598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259596.pem
	I1017 20:10:13.150532  483598 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:03 /usr/share/ca-certificates/259596.pem
	I1017 20:10:13.150654  483598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259596.pem
	I1017 20:10:13.197054  483598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259596.pem /etc/ssl/certs/51391683.0"
	I1017 20:10:13.208061  483598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2595962.pem && ln -fs /usr/share/ca-certificates/2595962.pem /etc/ssl/certs/2595962.pem"
	I1017 20:10:13.218755  483598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2595962.pem
	I1017 20:10:13.223447  483598 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:03 /usr/share/ca-certificates/2595962.pem
	I1017 20:10:13.223517  483598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2595962.pem
	I1017 20:10:13.273768  483598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2595962.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:10:13.296226  483598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:10:13.312365  483598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:10:13.324956  483598 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:10:13.325040  483598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:10:13.410860  483598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:10:13.426533  483598 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:10:13.431183  483598 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 20:10:13.431239  483598 kubeadm.go:400] StartCluster: {Name:auto-804622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-804622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:10:13.431322  483598 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:10:13.431386  483598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:10:13.479632  483598 cri.go:89] found id: ""
	I1017 20:10:13.479718  483598 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:10:13.497269  483598 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:10:13.510771  483598 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 20:10:13.510835  483598 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:10:13.520125  483598 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 20:10:13.520196  483598 kubeadm.go:157] found existing configuration files:
	
	I1017 20:10:13.520280  483598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 20:10:13.534637  483598 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 20:10:13.534779  483598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 20:10:13.547079  483598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 20:10:13.556956  483598 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 20:10:13.557072  483598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:10:13.564870  483598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 20:10:13.574365  483598 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 20:10:13.574494  483598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:10:13.582698  483598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 20:10:13.597219  483598 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 20:10:13.597348  483598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 20:10:13.605909  483598 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 20:10:13.663458  483598 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 20:10:13.663901  483598 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 20:10:13.694724  483598 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 20:10:13.694886  483598 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1017 20:10:13.694965  483598 kubeadm.go:318] OS: Linux
	I1017 20:10:13.695056  483598 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 20:10:13.695139  483598 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1017 20:10:13.695224  483598 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 20:10:13.695310  483598 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 20:10:13.695396  483598 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 20:10:13.695486  483598 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 20:10:13.695610  483598 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 20:10:13.695695  483598 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 20:10:13.695781  483598 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1017 20:10:13.790691  483598 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 20:10:13.790879  483598 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 20:10:13.791015  483598 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 20:10:13.800909  483598 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1017 20:10:13.100144  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	W1017 20:10:15.591766  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	I1017 20:10:13.808096  483598 out.go:252]   - Generating certificates and keys ...
	I1017 20:10:13.808293  483598 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 20:10:13.808399  483598 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 20:10:14.278206  483598 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 20:10:14.548085  483598 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 20:10:14.712660  483598 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 20:10:15.942607  483598 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	W1017 20:10:17.591875  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	W1017 20:10:19.592891  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	I1017 20:10:17.686018  483598 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 20:10:17.686644  483598 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-804622 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 20:10:18.058357  483598 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 20:10:18.059073  483598 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-804622 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 20:10:18.594021  483598 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 20:10:18.848323  483598 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 20:10:19.192894  483598 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 20:10:19.193058  483598 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 20:10:19.376921  483598 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 20:10:19.771813  483598 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 20:10:21.122360  483598 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 20:10:22.652641  483598 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 20:10:23.389128  483598 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 20:10:23.390076  483598 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 20:10:23.393391  483598 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1017 20:10:21.593220  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	W1017 20:10:23.594611  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	W1017 20:10:25.595758  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	I1017 20:10:23.398826  483598 out.go:252]   - Booting up control plane ...
	I1017 20:10:23.398937  483598 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 20:10:23.399019  483598 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 20:10:23.399263  483598 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 20:10:23.436258  483598 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 20:10:23.436463  483598 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 20:10:23.446158  483598 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 20:10:23.446373  483598 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 20:10:23.446458  483598 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 20:10:23.617482  483598 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 20:10:23.617689  483598 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 20:10:25.120850  483598 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501931561s
	I1017 20:10:25.123302  483598 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 20:10:25.123706  483598 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1017 20:10:25.124076  483598 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 20:10:25.125053  483598 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1017 20:10:28.090729  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	W1017 20:10:30.096207  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	I1017 20:10:29.343008  483598 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.217459766s
	I1017 20:10:30.412362  483598 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.286566295s
	I1017 20:10:32.126545  483598 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.002057342s
	I1017 20:10:32.146970  483598 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 20:10:32.161792  483598 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 20:10:32.176821  483598 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 20:10:32.177065  483598 kubeadm.go:318] [mark-control-plane] Marking the node auto-804622 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 20:10:32.188825  483598 kubeadm.go:318] [bootstrap-token] Using token: arqy1z.6dykx1ylb9hfjatw
	I1017 20:10:32.191931  483598 out.go:252]   - Configuring RBAC rules ...
	I1017 20:10:32.192060  483598 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 20:10:32.196316  483598 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 20:10:32.205820  483598 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 20:10:32.212670  483598 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 20:10:32.219939  483598 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 20:10:32.225874  483598 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 20:10:32.534499  483598 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 20:10:32.976613  483598 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 20:10:33.532901  483598 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 20:10:33.534300  483598 kubeadm.go:318] 
	I1017 20:10:33.534384  483598 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 20:10:33.534395  483598 kubeadm.go:318] 
	I1017 20:10:33.534476  483598 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 20:10:33.534480  483598 kubeadm.go:318] 
	I1017 20:10:33.534507  483598 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 20:10:33.534569  483598 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 20:10:33.534622  483598 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 20:10:33.534627  483598 kubeadm.go:318] 
	I1017 20:10:33.534691  483598 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 20:10:33.534696  483598 kubeadm.go:318] 
	I1017 20:10:33.534756  483598 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 20:10:33.534761  483598 kubeadm.go:318] 
	I1017 20:10:33.534815  483598 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 20:10:33.534893  483598 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 20:10:33.534964  483598 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 20:10:33.534969  483598 kubeadm.go:318] 
	I1017 20:10:33.535056  483598 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 20:10:33.535137  483598 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 20:10:33.535141  483598 kubeadm.go:318] 
	I1017 20:10:33.535229  483598 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token arqy1z.6dykx1ylb9hfjatw \
	I1017 20:10:33.535336  483598 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 \
	I1017 20:10:33.535359  483598 kubeadm.go:318] 	--control-plane 
	I1017 20:10:33.535364  483598 kubeadm.go:318] 
	I1017 20:10:33.535452  483598 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 20:10:33.535457  483598 kubeadm.go:318] 
	I1017 20:10:33.535542  483598 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token arqy1z.6dykx1ylb9hfjatw \
	I1017 20:10:33.535648  483598 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c173d402364ab96a1b06270520df77fdd46158f58d9973521bd5c66c234b9578 
	I1017 20:10:33.540597  483598 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1017 20:10:33.540839  483598 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1017 20:10:33.540952  483598 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 20:10:33.540975  483598 cni.go:84] Creating CNI manager for ""
	I1017 20:10:33.540987  483598 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:10:33.544258  483598 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1017 20:10:32.590961  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	W1017 20:10:35.090992  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	I1017 20:10:33.547050  483598 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 20:10:33.552655  483598 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 20:10:33.552675  483598 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 20:10:33.577088  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 20:10:33.996115  483598 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 20:10:33.996247  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:33.996325  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-804622 minikube.k8s.io/updated_at=2025_10_17T20_10_33_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d minikube.k8s.io/name=auto-804622 minikube.k8s.io/primary=true
	I1017 20:10:34.331211  483598 ops.go:34] apiserver oom_adj: -16
	I1017 20:10:34.331316  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:34.831681  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:35.331970  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:35.832065  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:36.332337  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:36.831469  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:37.332116  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:37.831558  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:38.331826  483598 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 20:10:38.462461  483598 kubeadm.go:1113] duration metric: took 4.466260221s to wait for elevateKubeSystemPrivileges
	I1017 20:10:38.462493  483598 kubeadm.go:402] duration metric: took 25.031257355s to StartCluster
	I1017 20:10:38.462513  483598 settings.go:142] acquiring lock: {Name:mk5db554fbe4e892747888080684192e7459b2fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:38.462600  483598 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:10:38.463542  483598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21753-257739/kubeconfig: {Name:mk9e9d8b595e8938a6556cc275d9b943b6c6fd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:10:38.463764  483598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 20:10:38.463770  483598 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:10:38.464025  483598 config.go:182] Loaded profile config "auto-804622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:10:38.464072  483598 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:10:38.464136  483598 addons.go:69] Setting storage-provisioner=true in profile "auto-804622"
	I1017 20:10:38.464151  483598 addons.go:238] Setting addon storage-provisioner=true in "auto-804622"
	I1017 20:10:38.464180  483598 host.go:66] Checking if "auto-804622" exists ...
	I1017 20:10:38.464643  483598 cli_runner.go:164] Run: docker container inspect auto-804622 --format={{.State.Status}}
	I1017 20:10:38.465033  483598 addons.go:69] Setting default-storageclass=true in profile "auto-804622"
	I1017 20:10:38.465050  483598 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-804622"
	I1017 20:10:38.465310  483598 cli_runner.go:164] Run: docker container inspect auto-804622 --format={{.State.Status}}
	I1017 20:10:38.468691  483598 out.go:179] * Verifying Kubernetes components...
	I1017 20:10:38.476746  483598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:10:38.506532  483598 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:10:38.507961  483598 addons.go:238] Setting addon default-storageclass=true in "auto-804622"
	I1017 20:10:38.507993  483598 host.go:66] Checking if "auto-804622" exists ...
	I1017 20:10:38.508394  483598 cli_runner.go:164] Run: docker container inspect auto-804622 --format={{.State.Status}}
	I1017 20:10:38.510635  483598 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:10:38.510654  483598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:10:38.510715  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:38.533710  483598 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:10:38.533730  483598 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:10:38.533794  483598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-804622
	I1017 20:10:38.555675  483598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/auto-804622/id_rsa Username:docker}
	I1017 20:10:38.566309  483598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/auto-804622/id_rsa Username:docker}
	I1017 20:10:38.766225  483598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 20:10:38.841105  483598 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:10:38.927636  483598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:10:38.943830  483598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:10:39.423715  483598 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1017 20:10:39.425731  483598 node_ready.go:35] waiting up to 15m0s for node "auto-804622" to be "Ready" ...
	I1017 20:10:39.822803  483598 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1017 20:10:37.590247  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	W1017 20:10:39.590566  481830 pod_ready.go:104] pod "coredns-66bc5c9577-6mknt" is not "Ready", error: <nil>
	I1017 20:10:40.090980  481830 pod_ready.go:94] pod "coredns-66bc5c9577-6mknt" is "Ready"
	I1017 20:10:40.091012  481830 pod_ready.go:86] duration metric: took 31.506030142s for pod "coredns-66bc5c9577-6mknt" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:40.096904  481830 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:40.105631  481830 pod_ready.go:94] pod "etcd-default-k8s-diff-port-740780" is "Ready"
	I1017 20:10:40.105667  481830 pod_ready.go:86] duration metric: took 8.726548ms for pod "etcd-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:40.110752  481830 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:40.121523  481830 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-740780" is "Ready"
	I1017 20:10:40.121566  481830 pod_ready.go:86] duration metric: took 10.769406ms for pod "kube-apiserver-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:40.127012  481830 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:40.288391  481830 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-740780" is "Ready"
	I1017 20:10:40.288422  481830 pod_ready.go:86] duration metric: took 161.377883ms for pod "kube-controller-manager-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:40.490232  481830 pod_ready.go:83] waiting for pod "kube-proxy-8x772" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:40.888013  481830 pod_ready.go:94] pod "kube-proxy-8x772" is "Ready"
	I1017 20:10:40.888047  481830 pod_ready.go:86] duration metric: took 397.78546ms for pod "kube-proxy-8x772" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:41.087822  481830 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:41.488392  481830 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-740780" is "Ready"
	I1017 20:10:41.488467  481830 pod_ready.go:86] duration metric: took 400.614232ms for pod "kube-scheduler-default-k8s-diff-port-740780" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:10:41.488496  481830 pod_ready.go:40] duration metric: took 32.911560072s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:10:41.563453  481830 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 20:10:41.566420  481830 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-740780" cluster and "default" namespace by default
	I1017 20:10:39.825804  483598 addons.go:514] duration metric: took 1.361720083s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1017 20:10:39.928019  483598 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-804622" context rescaled to 1 replicas
	W1017 20:10:41.429033  483598 node_ready.go:57] node "auto-804622" has "Ready":"False" status (will retry)
	W1017 20:10:43.928585  483598 node_ready.go:57] node "auto-804622" has "Ready":"False" status (will retry)
	W1017 20:10:45.929097  483598 node_ready.go:57] node "auto-804622" has "Ready":"False" status (will retry)
	W1017 20:10:47.929678  483598 node_ready.go:57] node "auto-804622" has "Ready":"False" status (will retry)
	W1017 20:10:50.428354  483598 node_ready.go:57] node "auto-804622" has "Ready":"False" status (will retry)
	W1017 20:10:52.429512  483598 node_ready.go:57] node "auto-804622" has "Ready":"False" status (will retry)
	W1017 20:10:54.929126  483598 node_ready.go:57] node "auto-804622" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 17 20:10:34 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:34.213084287Z" level=info msg="Removed container cfcc4ac34cdab08ebe73bbd94e6de4343ad52fd37f9840a185fc6f1f13c06441: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ms6q/dashboard-metrics-scraper" id=4f51a21f-e76e-47a5-96c9-fb67174e89fe name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:10:37 default-k8s-diff-port-740780 conmon[1146]: conmon 355f42b2d9e5ab8e9cc0 <ninfo>: container 1152 exited with status 1
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.198513153Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5bf35f67-d029-40ff-9b20-d132b362159a name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.199604085Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=033a709a-a5d4-4b27-a58b-880ce5c3c2f6 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.200457797Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=26153b63-3e9e-41ad-a451-5535e8df2cde name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.200726147Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.209653166Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.209832288Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b009eec7bd11fc5a27fe1713e37940cb3646cde95c045ec89271d5e511beffc0/merged/etc/passwd: no such file or directory"
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.209863163Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b009eec7bd11fc5a27fe1713e37940cb3646cde95c045ec89271d5e511beffc0/merged/etc/group: no such file or directory"
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.210129568Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.243615839Z" level=info msg="Created container a159b70cb0ab7f408b26017316bda6e688ef0df499dfafaeb05cb122b5fb6b17: kube-system/storage-provisioner/storage-provisioner" id=26153b63-3e9e-41ad-a451-5535e8df2cde name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.245223303Z" level=info msg="Starting container: a159b70cb0ab7f408b26017316bda6e688ef0df499dfafaeb05cb122b5fb6b17" id=de5b67e3-fdc3-4df6-9641-de501fbbf10f name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:10:38 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:38.250231545Z" level=info msg="Started container" PID=1640 containerID=a159b70cb0ab7f408b26017316bda6e688ef0df499dfafaeb05cb122b5fb6b17 description=kube-system/storage-provisioner/storage-provisioner id=de5b67e3-fdc3-4df6-9641-de501fbbf10f name=/runtime.v1.RuntimeService/StartContainer sandboxID=3306bff645312adf8def5e71965035b303c2e22027e7206658971e4f6b47cd98
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.955635803Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.963583592Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.963616641Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.963639139Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.966613943Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.966648764Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.966670417Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.969748159Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.969783292Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.969810754Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.972955678Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:10:47 default-k8s-diff-port-740780 crio[652]: time="2025-10-17T20:10:47.97299068Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	a159b70cb0ab7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   3306bff645312       storage-provisioner                                    kube-system
	6e976958932ed       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   96969515dc76a       dashboard-metrics-scraper-6ffb444bf9-4ms6q             kubernetes-dashboard
	e477022720326       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago      Running             kubernetes-dashboard        0                   10f5a9fa8e695       kubernetes-dashboard-855c9754f9-rm6kw                  kubernetes-dashboard
	3380c611e12db       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago      Running             coredns                     1                   afd23f7b94063       coredns-66bc5c9577-6mknt                               kube-system
	331bf8b9df6dd       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   119af0bbf542b       busybox                                                default
	355f42b2d9e5a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   3306bff645312       storage-provisioner                                    kube-system
	ea08533626e75       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   f9c2817c2e370       kindnet-fnx26                                          kube-system
	cb4d42676d8a4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago      Running             kube-proxy                  1                   296db648a407e       kube-proxy-8x772                                       kube-system
	a6b3e974b27e4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   ede6389dbe66e       etcd-default-k8s-diff-port-740780                      kube-system
	7c7665546cb77       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   9ab82f8b777bb       kube-scheduler-default-k8s-diff-port-740780            kube-system
	a6caff4182327       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   d0f58d59f6f5e       kube-apiserver-default-k8s-diff-port-740780            kube-system
	9b30d2deb9ae5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   f5d51c6cfcd54       kube-controller-manager-default-k8s-diff-port-740780   kube-system
	
	
	==> coredns [3380c611e12dbdeaa42525e0a861b568befa9b96d862018967217894b34edf5b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59531 - 33281 "HINFO IN 4691281537781563261.7670465878819384505. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023499334s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-740780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-740780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73a80cc9bc99174c010556d98400e9fa16adda9d
	                    minikube.k8s.io/name=default-k8s-diff-port-740780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_08_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:08:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-740780
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:10:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:10:37 +0000   Fri, 17 Oct 2025 20:08:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:10:37 +0000   Fri, 17 Oct 2025 20:08:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:10:37 +0000   Fri, 17 Oct 2025 20:08:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:10:37 +0000   Fri, 17 Oct 2025 20:09:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-740780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                aeaa4ad6-0a8d-467b-bdc0-41bfb9026ea7
	  Boot ID:                    c25e6f84-8fc8-4b4e-86be-c1f6c81779b5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-6mknt                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 etcd-default-k8s-diff-port-740780                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m23s
	  kube-system                 kindnet-fnx26                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-740780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-740780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-8x772                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-default-k8s-diff-port-740780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4ms6q              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rm6kw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m17s              kube-proxy       
	  Normal   Starting                 50s                kube-proxy       
	  Normal   Starting                 2m24s              kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m24s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     2m23s              kubelet          Node default-k8s-diff-port-740780 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m23s              kubelet          Node default-k8s-diff-port-740780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  2m23s              kubelet          Node default-k8s-diff-port-740780 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m19s              node-controller  Node default-k8s-diff-port-740780 event: Registered Node default-k8s-diff-port-740780 in Controller
	  Normal   NodeReady                97s                kubelet          Node default-k8s-diff-port-740780 status is now: NodeReady
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 60s)  kubelet          Node default-k8s-diff-port-740780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 60s)  kubelet          Node default-k8s-diff-port-740780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 60s)  kubelet          Node default-k8s-diff-port-740780 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                node-controller  Node default-k8s-diff-port-740780 event: Registered Node default-k8s-diff-port-740780 in Controller
	
	
	==> dmesg <==
	[ +43.697346] overlayfs: idmapped layers are currently not supported
	[Oct17 19:48] overlayfs: idmapped layers are currently not supported
	[Oct17 19:49] overlayfs: idmapped layers are currently not supported
	[ +26.194162] overlayfs: idmapped layers are currently not supported
	[Oct17 19:50] overlayfs: idmapped layers are currently not supported
	[Oct17 19:52] overlayfs: idmapped layers are currently not supported
	[Oct17 19:54] overlayfs: idmapped layers are currently not supported
	[Oct17 19:55] overlayfs: idmapped layers are currently not supported
	[Oct17 19:56] overlayfs: idmapped layers are currently not supported
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:01] overlayfs: idmapped layers are currently not supported
	[ +29.873287] overlayfs: idmapped layers are currently not supported
	[Oct17 20:02] overlayfs: idmapped layers are currently not supported
	[ +29.827785] overlayfs: idmapped layers are currently not supported
	[Oct17 20:03] overlayfs: idmapped layers are currently not supported
	[Oct17 20:04] overlayfs: idmapped layers are currently not supported
	[Oct17 20:05] overlayfs: idmapped layers are currently not supported
	[Oct17 20:06] overlayfs: idmapped layers are currently not supported
	[Oct17 20:07] overlayfs: idmapped layers are currently not supported
	[ +30.002292] overlayfs: idmapped layers are currently not supported
	[Oct17 20:08] overlayfs: idmapped layers are currently not supported
	[Oct17 20:09] overlayfs: idmapped layers are currently not supported
	[ +26.726183] overlayfs: idmapped layers are currently not supported
	[ +20.054803] overlayfs: idmapped layers are currently not supported
	[Oct17 20:10] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a6b3e974b27e414682e8adc3b208f72c9f313b4733f18ab5f560bd7e238be80a] <==
	{"level":"warn","ts":"2025-10-17T20:10:04.146118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.188804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.234056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.293232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.309537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.358718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.387664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.421701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.438913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.513353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.569009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.629876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.682972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.723123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.749816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.776766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.805389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.815961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.841540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.860567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.878996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.906011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.924085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:04.943035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:10:05.020749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35042","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:10:59 up  2:53,  0 user,  load average: 4.07, 4.46, 3.48
	Linux default-k8s-diff-port-740780 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ea08533626e7592fc61ba304cf97cd8eb64de0494753bde37a8b9d87caeca53f] <==
	I1017 20:10:07.748105       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:10:07.748643       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 20:10:07.748766       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:10:07.748779       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:10:07.748791       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:10:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:10:07.969297       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:10:07.969324       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:10:07.969334       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:10:07.969656       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 20:10:37.951453       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 20:10:37.970021       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 20:10:37.970028       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 20:10:37.970393       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1017 20:10:39.570235       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:10:39.570331       1 metrics.go:72] Registering metrics
	I1017 20:10:39.570407       1 controller.go:711] "Syncing nftables rules"
	I1017 20:10:47.955300       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:10:47.955366       1 main.go:301] handling current node
	I1017 20:10:57.960256       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:10:57.960291       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a6caff41823275ad2cd049c0053ce5ae7602d4c363bc83b1fe7629a564b7ac54] <==
	I1017 20:10:06.369798       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:10:06.369803       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:10:06.378338       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 20:10:06.378408       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:10:06.393824       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 20:10:06.393893       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:10:06.403082       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 20:10:06.403113       1 policy_source.go:240] refreshing policies
	I1017 20:10:06.403950       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 20:10:06.403974       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1017 20:10:06.408634       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:10:06.409629       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:10:06.418870       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1017 20:10:06.444141       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:10:06.681850       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:10:06.978935       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:10:07.689700       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 20:10:07.897330       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:10:07.990866       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:10:08.034084       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:10:08.418590       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.137.89"}
	I1017 20:10:08.506173       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.216.195"}
	I1017 20:10:10.721476       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:10:10.970602       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:10:11.021366       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9b30d2deb9ae5ab342e2a970b00848a001b112b0bfa707783b0702db3735167d] <==
	I1017 20:10:10.505232       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 20:10:10.505314       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-740780"
	I1017 20:10:10.505390       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 20:10:10.508276       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:10:10.512778       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 20:10:10.513144       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 20:10:10.513270       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 20:10:10.513331       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 20:10:10.514397       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 20:10:10.514418       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 20:10:10.514465       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1017 20:10:10.515894       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 20:10:10.518820       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 20:10:10.521966       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:10:10.526204       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 20:10:10.529460       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 20:10:10.529506       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 20:10:10.531756       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 20:10:10.536029       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 20:10:10.538318       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:10:10.539552       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 20:10:10.565063       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:10:10.565165       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:10:10.565195       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:10:10.583764       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [cb4d42676d8a4c718ff3906f4fcce605b5ee16ab93b39e0e2482f60b722be015] <==
	I1017 20:10:08.131420       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:10:08.406879       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:10:08.532587       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:10:08.532645       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1017 20:10:08.532740       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:10:08.593092       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:10:08.593158       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:10:08.601386       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:10:08.601855       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:10:08.602083       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:10:08.603402       1 config.go:200] "Starting service config controller"
	I1017 20:10:08.603521       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:10:08.603583       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:10:08.603618       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:10:08.603661       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:10:08.603688       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:10:08.607409       1 config.go:309] "Starting node config controller"
	I1017 20:10:08.607475       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:10:08.607507       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:10:08.704191       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:10:08.704215       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:10:08.704239       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7c7665546cb77975e68deac4ff243aa42b49d8525c2fc62e721424af6d1e6123] <==
	I1017 20:10:03.747579       1 serving.go:386] Generated self-signed cert in-memory
	I1017 20:10:08.363298       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:10:08.363412       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:10:08.397699       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:10:08.397874       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 20:10:08.397919       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 20:10:08.397993       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 20:10:08.399009       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:10:08.399076       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:10:08.399128       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:10:08.399158       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:10:08.498993       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1017 20:10:08.499187       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:10:08.499252       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:10:11 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:11.318459     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkzgp\" (UniqueName: \"kubernetes.io/projected/957b8ab9-0704-4c13-a3ab-a17691e5e2c1-kube-api-access-xkzgp\") pod \"kubernetes-dashboard-855c9754f9-rm6kw\" (UID: \"957b8ab9-0704-4c13-a3ab-a17691e5e2c1\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rm6kw"
	Oct 17 20:10:11 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:11.318527     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fe11556f-43a9-447c-922b-805c7a1b3067-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4ms6q\" (UID: \"fe11556f-43a9-447c-922b-805c7a1b3067\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ms6q"
	Oct 17 20:10:11 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:11.318554     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5rfx\" (UniqueName: \"kubernetes.io/projected/fe11556f-43a9-447c-922b-805c7a1b3067-kube-api-access-n5rfx\") pod \"dashboard-metrics-scraper-6ffb444bf9-4ms6q\" (UID: \"fe11556f-43a9-447c-922b-805c7a1b3067\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ms6q"
	Oct 17 20:10:11 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:11.318574     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/957b8ab9-0704-4c13-a3ab-a17691e5e2c1-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-rm6kw\" (UID: \"957b8ab9-0704-4c13-a3ab-a17691e5e2c1\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rm6kw"
	Oct 17 20:10:11 default-k8s-diff-port-740780 kubelet[775]: W1017 20:10:11.569744     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395/crio-96969515dc76a36e562c34c6a7ce4521efebeb60097876b91d7768dbae7ed0d0 WatchSource:0}: Error finding container 96969515dc76a36e562c34c6a7ce4521efebeb60097876b91d7768dbae7ed0d0: Status 404 returned error can't find the container with id 96969515dc76a36e562c34c6a7ce4521efebeb60097876b91d7768dbae7ed0d0
	Oct 17 20:10:11 default-k8s-diff-port-740780 kubelet[775]: W1017 20:10:11.591791     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fedc9c1ddaae094c67a12d1fab5b5223b661aae8dc03afe80a645aa16d765395/crio-10f5a9fa8e695d1c2ae81e81ffa67c9e542b255567200e6a387c46d1ad526879 WatchSource:0}: Error finding container 10f5a9fa8e695d1c2ae81e81ffa67c9e542b255567200e6a387c46d1ad526879: Status 404 returned error can't find the container with id 10f5a9fa8e695d1c2ae81e81ffa67c9e542b255567200e6a387c46d1ad526879
	Oct 17 20:10:18 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:18.125316     775 scope.go:117] "RemoveContainer" containerID="3f1c1a63f12001cc6ec5075381d6e60eabedb84bf7b6f990f290bc1296c7e8cd"
	Oct 17 20:10:19 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:19.129050     775 scope.go:117] "RemoveContainer" containerID="3f1c1a63f12001cc6ec5075381d6e60eabedb84bf7b6f990f290bc1296c7e8cd"
	Oct 17 20:10:19 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:19.130030     775 scope.go:117] "RemoveContainer" containerID="cfcc4ac34cdab08ebe73bbd94e6de4343ad52fd37f9840a185fc6f1f13c06441"
	Oct 17 20:10:19 default-k8s-diff-port-740780 kubelet[775]: E1017 20:10:19.130198     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ms6q_kubernetes-dashboard(fe11556f-43a9-447c-922b-805c7a1b3067)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ms6q" podUID="fe11556f-43a9-447c-922b-805c7a1b3067"
	Oct 17 20:10:20 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:20.133071     775 scope.go:117] "RemoveContainer" containerID="cfcc4ac34cdab08ebe73bbd94e6de4343ad52fd37f9840a185fc6f1f13c06441"
	Oct 17 20:10:20 default-k8s-diff-port-740780 kubelet[775]: E1017 20:10:20.133271     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ms6q_kubernetes-dashboard(fe11556f-43a9-447c-922b-805c7a1b3067)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ms6q" podUID="fe11556f-43a9-447c-922b-805c7a1b3067"
	Oct 17 20:10:21 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:21.524288     775 scope.go:117] "RemoveContainer" containerID="cfcc4ac34cdab08ebe73bbd94e6de4343ad52fd37f9840a185fc6f1f13c06441"
	Oct 17 20:10:21 default-k8s-diff-port-740780 kubelet[775]: E1017 20:10:21.524480     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ms6q_kubernetes-dashboard(fe11556f-43a9-447c-922b-805c7a1b3067)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ms6q" podUID="fe11556f-43a9-447c-922b-805c7a1b3067"
	Oct 17 20:10:33 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:33.961874     775 scope.go:117] "RemoveContainer" containerID="cfcc4ac34cdab08ebe73bbd94e6de4343ad52fd37f9840a185fc6f1f13c06441"
	Oct 17 20:10:34 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:34.184929     775 scope.go:117] "RemoveContainer" containerID="cfcc4ac34cdab08ebe73bbd94e6de4343ad52fd37f9840a185fc6f1f13c06441"
	Oct 17 20:10:34 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:34.185635     775 scope.go:117] "RemoveContainer" containerID="6e976958932ed0a771f2d17bd5b5b8abf05e910444ce5500a110d35836ac6690"
	Oct 17 20:10:34 default-k8s-diff-port-740780 kubelet[775]: E1017 20:10:34.185823     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ms6q_kubernetes-dashboard(fe11556f-43a9-447c-922b-805c7a1b3067)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ms6q" podUID="fe11556f-43a9-447c-922b-805c7a1b3067"
	Oct 17 20:10:34 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:34.209960     775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rm6kw" podStartSLOduration=10.050183327 podStartE2EDuration="23.209942891s" podCreationTimestamp="2025-10-17 20:10:11 +0000 UTC" firstStartedPulling="2025-10-17 20:10:11.59760753 +0000 UTC m=+12.935662455" lastFinishedPulling="2025-10-17 20:10:24.757367086 +0000 UTC m=+26.095422019" observedRunningTime="2025-10-17 20:10:25.181623839 +0000 UTC m=+26.519678781" watchObservedRunningTime="2025-10-17 20:10:34.209942891 +0000 UTC m=+35.547997824"
	Oct 17 20:10:38 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:38.197976     775 scope.go:117] "RemoveContainer" containerID="355f42b2d9e5ab8e9cc0398be0c31946c5fd5ef67f1542040bd152dc86fc9eaa"
	Oct 17 20:10:41 default-k8s-diff-port-740780 kubelet[775]: I1017 20:10:41.524000     775 scope.go:117] "RemoveContainer" containerID="6e976958932ed0a771f2d17bd5b5b8abf05e910444ce5500a110d35836ac6690"
	Oct 17 20:10:41 default-k8s-diff-port-740780 kubelet[775]: E1017 20:10:41.524726     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ms6q_kubernetes-dashboard(fe11556f-43a9-447c-922b-805c7a1b3067)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ms6q" podUID="fe11556f-43a9-447c-922b-805c7a1b3067"
	Oct 17 20:10:53 default-k8s-diff-port-740780 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:10:53 default-k8s-diff-port-740780 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:10:53 default-k8s-diff-port-740780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e4770227203260bbb9d237b5374c9f19a250d85735e375ebb88ac0f7f39647f1] <==
	2025/10/17 20:10:24 Using namespace: kubernetes-dashboard
	2025/10/17 20:10:24 Using in-cluster config to connect to apiserver
	2025/10/17 20:10:24 Using secret token for csrf signing
	2025/10/17 20:10:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 20:10:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 20:10:24 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 20:10:24 Generating JWE encryption key
	2025/10/17 20:10:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 20:10:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 20:10:25 Initializing JWE encryption key from synchronized object
	2025/10/17 20:10:25 Creating in-cluster Sidecar client
	2025/10/17 20:10:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 20:10:25 Serving insecurely on HTTP port: 9090
	2025/10/17 20:10:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 20:10:24 Starting overwatch
	
	
	==> storage-provisioner [355f42b2d9e5ab8e9cc0398be0c31946c5fd5ef67f1542040bd152dc86fc9eaa] <==
	I1017 20:10:07.782555       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 20:10:37.788859       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a159b70cb0ab7f408b26017316bda6e688ef0df499dfafaeb05cb122b5fb6b17] <==
	I1017 20:10:38.264781       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:10:38.293439       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:10:38.293503       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 20:10:38.296805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:41.751924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:46.012583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:49.611467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:52.666034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:55.689395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:55.698946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:10:55.699171       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:10:55.701855       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-740780_935034ff-dc55-4fd7-ad80-c74cd8208d67!
	I1017 20:10:55.702002       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81077895-6eb3-4ab5-abce-e2589ce9b483", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-740780_935034ff-dc55-4fd7-ad80-c74cd8208d67 became leader
	W1017 20:10:55.709508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:55.725949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:10:55.804912       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-740780_935034ff-dc55-4fd7-ad80-c74cd8208d67!
	W1017 20:10:57.729279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:10:57.736420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-740780 -n default-k8s-diff-port-740780
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-740780 -n default-k8s-diff-port-740780: exit status 2 (340.623537ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-740780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.58s)
E1017 20:16:40.482510  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:16:44.395212  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:16:48.231509  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (252/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.46
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 6.06
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.8
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 169.49
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 9.93
48 TestAddons/StoppedEnableDisable 12.48
49 TestCertOptions 38.22
50 TestCertExpiration 246.59
52 TestForceSystemdFlag 45.07
53 TestForceSystemdEnv 45.93
59 TestErrorSpam/setup 34.31
60 TestErrorSpam/start 0.84
61 TestErrorSpam/status 1.07
62 TestErrorSpam/pause 6.28
63 TestErrorSpam/unpause 5.31
64 TestErrorSpam/stop 1.51
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 82.35
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 29.05
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.4
76 TestFunctional/serial/CacheCmd/cache/add_local 1.09
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.81
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.13
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
84 TestFunctional/serial/ExtraConfig 36.06
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.46
87 TestFunctional/serial/LogsFileCmd 1.48
88 TestFunctional/serial/InvalidService 3.99
90 TestFunctional/parallel/ConfigCmd 0.58
91 TestFunctional/parallel/DashboardCmd 11.14
92 TestFunctional/parallel/DryRun 0.63
93 TestFunctional/parallel/InternationalLanguage 0.23
94 TestFunctional/parallel/StatusCmd 1.03
99 TestFunctional/parallel/AddonsCmd 0.19
100 TestFunctional/parallel/PersistentVolumeClaim 26.02
102 TestFunctional/parallel/SSHCmd 0.72
103 TestFunctional/parallel/CpCmd 2.32
105 TestFunctional/parallel/FileSync 0.36
106 TestFunctional/parallel/CertSync 2.25
110 TestFunctional/parallel/NodeLabels 0.09
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.65
114 TestFunctional/parallel/License 0.32
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 1.27
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.93
122 TestFunctional/parallel/ImageCommands/Setup 0.69
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.63
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.56
133 TestFunctional/parallel/ProfileCmd/profile_list 0.51
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.65
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.33
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
148 TestFunctional/parallel/MountCmd/any-port 7.14
149 TestFunctional/parallel/MountCmd/specific-port 1.95
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.29
151 TestFunctional/parallel/ServiceCmd/List 1.36
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.4
156 TestFunctional/delete_echo-server_images 0.05
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 199.98
164 TestMultiControlPlane/serial/DeployApp 6.43
165 TestMultiControlPlane/serial/PingHostFromPods 1.5
166 TestMultiControlPlane/serial/AddWorkerNode 59.9
167 TestMultiControlPlane/serial/NodeLabels 0.1
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.28
169 TestMultiControlPlane/serial/CopyFile 19.59
170 TestMultiControlPlane/serial/StopSecondaryNode 12.86
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.79
172 TestMultiControlPlane/serial/RestartSecondaryNode 31.35
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.31
185 TestJSONOutput/start/Command 80.58
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.84
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 41.6
211 TestKicCustomNetwork/use_default_bridge_network 37.36
212 TestKicExistingNetwork 35.5
213 TestKicCustomSubnet 39.95
214 TestKicStaticIP 37.98
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 73.5
219 TestMountStart/serial/StartWithMountFirst 9.53
220 TestMountStart/serial/VerifyMountFirst 0.29
221 TestMountStart/serial/StartWithMountSecond 6.76
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.3
226 TestMountStart/serial/RestartStopped 7.63
227 TestMountStart/serial/VerifyMountPostStop 0.29
230 TestMultiNode/serial/FreshStart2Nodes 138.36
231 TestMultiNode/serial/DeployApp2Nodes 4.35
232 TestMultiNode/serial/PingHostFrom2Pods 1.12
233 TestMultiNode/serial/AddNode 58.4
234 TestMultiNode/serial/MultiNodeLabels 0.08
235 TestMultiNode/serial/ProfileList 0.85
236 TestMultiNode/serial/CopyFile 10.37
237 TestMultiNode/serial/StopNode 2.4
238 TestMultiNode/serial/StartAfterStop 7.89
239 TestMultiNode/serial/RestartKeepsNodes 73.91
240 TestMultiNode/serial/DeleteNode 5.67
241 TestMultiNode/serial/StopMultiNode 24.09
242 TestMultiNode/serial/RestartMultiNode 48.05
243 TestMultiNode/serial/ValidateNameConflict 43.06
248 TestPreload 135.71
250 TestScheduledStopUnix 113.46
253 TestInsufficientStorage 11.78
254 TestRunningBinaryUpgrade 53.4
256 TestKubernetesUpgrade 361.26
257 TestMissingContainerUpgrade 118.42
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 47.68
261 TestNoKubernetes/serial/StartWithStopK8s 8.6
262 TestNoKubernetes/serial/Start 10.93
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
264 TestNoKubernetes/serial/ProfileList 0.78
265 TestNoKubernetes/serial/Stop 1.29
266 TestNoKubernetes/serial/StartNoArgs 7.09
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
268 TestStoppedBinaryUpgrade/Setup 0.73
269 TestStoppedBinaryUpgrade/Upgrade 59.49
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.45
279 TestPause/serial/Start 79.19
280 TestPause/serial/SecondStartNoReconfiguration 101.07
289 TestNetworkPlugins/group/false 4.9
294 TestStartStop/group/old-k8s-version/serial/FirstStart 62.2
295 TestStartStop/group/old-k8s-version/serial/DeployApp 8.5
297 TestStartStop/group/old-k8s-version/serial/Stop 12.01
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
299 TestStartStop/group/old-k8s-version/serial/SecondStart 48.29
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
305 TestStartStop/group/no-preload/serial/FirstStart 80.4
307 TestStartStop/group/embed-certs/serial/FirstStart 86.35
308 TestStartStop/group/no-preload/serial/DeployApp 9.33
310 TestStartStop/group/no-preload/serial/Stop 12.09
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
312 TestStartStop/group/no-preload/serial/SecondStart 27.7
313 TestStartStop/group/embed-certs/serial/DeployApp 10.41
315 TestStartStop/group/embed-certs/serial/Stop 12.32
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
318 TestStartStop/group/embed-certs/serial/SecondStart 52.55
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.35
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 89.43
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
329 TestStartStop/group/newest-cni/serial/FirstStart 37.32
330 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.46
331 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/Stop 1.35
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
335 TestStartStop/group/newest-cni/serial/SecondStart 16.58
337 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.41
338 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
343 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.3
344 TestNetworkPlugins/group/auto/Start 86.38
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
349 TestNetworkPlugins/group/kindnet/Start 81.04
350 TestNetworkPlugins/group/auto/KubeletFlags 0.4
351 TestNetworkPlugins/group/auto/NetCatPod 11.37
352 TestNetworkPlugins/group/auto/DNS 0.23
353 TestNetworkPlugins/group/auto/Localhost 0.18
354 TestNetworkPlugins/group/auto/HairPin 0.19
355 TestNetworkPlugins/group/calico/Start 65.22
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
358 TestNetworkPlugins/group/kindnet/NetCatPod 12.33
359 TestNetworkPlugins/group/kindnet/DNS 0.25
360 TestNetworkPlugins/group/kindnet/Localhost 0.17
361 TestNetworkPlugins/group/kindnet/HairPin 0.26
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/KubeletFlags 0.36
364 TestNetworkPlugins/group/custom-flannel/Start 69.05
365 TestNetworkPlugins/group/calico/NetCatPod 12.39
366 TestNetworkPlugins/group/calico/DNS 0.21
367 TestNetworkPlugins/group/calico/Localhost 0.17
368 TestNetworkPlugins/group/calico/HairPin 0.16
369 TestNetworkPlugins/group/enable-default-cni/Start 78.16
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.32
372 TestNetworkPlugins/group/custom-flannel/DNS 0.16
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
375 TestNetworkPlugins/group/flannel/Start 63.96
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.33
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
381 TestNetworkPlugins/group/bridge/Start 74.62
382 TestNetworkPlugins/group/flannel/ControllerPod 5.03
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.55
384 TestNetworkPlugins/group/flannel/NetCatPod 11.31
385 TestNetworkPlugins/group/flannel/DNS 0.25
386 TestNetworkPlugins/group/flannel/Localhost 0.17
387 TestNetworkPlugins/group/flannel/HairPin 0.18
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
389 TestNetworkPlugins/group/bridge/NetCatPod 10.28
390 TestNetworkPlugins/group/bridge/DNS 0.16
391 TestNetworkPlugins/group/bridge/Localhost 0.14
392 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.28.0/json-events (6.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-068460 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-068460 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.460148411s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.46s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1017 18:56:37.328668  259596 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1017 18:56:37.328753  259596 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-068460
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-068460: exit status 85 (94.103178ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-068460 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-068460 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 18:56:30
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 18:56:30.913049  259601 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:56:30.913169  259601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:30.913179  259601 out.go:374] Setting ErrFile to fd 2...
	I1017 18:56:30.913185  259601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:30.913457  259601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	W1017 18:56:30.913598  259601 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21753-257739/.minikube/config/config.json: open /home/jenkins/minikube-integration/21753-257739/.minikube/config/config.json: no such file or directory
	I1017 18:56:30.913986  259601 out.go:368] Setting JSON to true
	I1017 18:56:30.914774  259601 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5942,"bootTime":1760721449,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 18:56:30.914837  259601 start.go:141] virtualization:  
	I1017 18:56:30.918669  259601 out.go:99] [download-only-068460] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1017 18:56:30.918833  259601 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball: no such file or directory
	I1017 18:56:30.918949  259601 notify.go:220] Checking for updates...
	I1017 18:56:30.922486  259601 out.go:171] MINIKUBE_LOCATION=21753
	I1017 18:56:30.925707  259601 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 18:56:30.928692  259601 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 18:56:30.931643  259601 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 18:56:30.934725  259601 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1017 18:56:30.940321  259601 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1017 18:56:30.940645  259601 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 18:56:30.962040  259601 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 18:56:30.962154  259601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 18:56:31.019535  259601 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-17 18:56:31.00966295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 18:56:31.019647  259601 docker.go:318] overlay module found
	I1017 18:56:31.022695  259601 out.go:99] Using the docker driver based on user configuration
	I1017 18:56:31.022742  259601 start.go:305] selected driver: docker
	I1017 18:56:31.022749  259601 start.go:925] validating driver "docker" against <nil>
	I1017 18:56:31.022872  259601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 18:56:31.079299  259601 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-17 18:56:31.069563073 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 18:56:31.079469  259601 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 18:56:31.079818  259601 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1017 18:56:31.079976  259601 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1017 18:56:31.083052  259601 out.go:171] Using Docker driver with root privileges
	I1017 18:56:31.086192  259601 cni.go:84] Creating CNI manager for ""
	I1017 18:56:31.086287  259601 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 18:56:31.086300  259601 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 18:56:31.086388  259601 start.go:349] cluster config:
	{Name:download-only-068460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-068460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 18:56:31.089549  259601 out.go:99] Starting "download-only-068460" primary control-plane node in "download-only-068460" cluster
	I1017 18:56:31.089585  259601 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 18:56:31.092647  259601 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1017 18:56:31.092696  259601 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 18:56:31.092799  259601 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 18:56:31.109214  259601 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1017 18:56:31.109428  259601 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1017 18:56:31.109531  259601 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1017 18:56:31.156400  259601 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1017 18:56:31.156435  259601 cache.go:58] Caching tarball of preloaded images
	I1017 18:56:31.156634  259601 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 18:56:31.160124  259601 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1017 18:56:31.160160  259601 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1017 18:56:31.249406  259601 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1017 18:56:31.249549  259601 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-068460 host does not exist
	  To start a cluster, run: "minikube start -p download-only-068460"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-068460
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (6.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-290584 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-290584 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.064095094s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (6.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1017 18:56:43.831725  259596 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1017 18:56:43.831763  259596 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-290584
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-290584: exit status 85 (88.388976ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-068460 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-068460 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-068460                                                                                                                                                   │ download-only-068460 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │ 17 Oct 25 18:56 UTC │
	│ start   │ -o=json --download-only -p download-only-290584 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-290584 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 18:56:37
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 18:56:37.809168  259800 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:56:37.809283  259800 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:37.809294  259800 out.go:374] Setting ErrFile to fd 2...
	I1017 18:56:37.809299  259800 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:37.809534  259800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 18:56:37.809944  259800 out.go:368] Setting JSON to true
	I1017 18:56:37.810730  259800 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5949,"bootTime":1760721449,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 18:56:37.810795  259800 start.go:141] virtualization:  
	I1017 18:56:37.814114  259800 out.go:99] [download-only-290584] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 18:56:37.814383  259800 notify.go:220] Checking for updates...
	I1017 18:56:37.818408  259800 out.go:171] MINIKUBE_LOCATION=21753
	I1017 18:56:37.821364  259800 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 18:56:37.824309  259800 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 18:56:37.827288  259800 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 18:56:37.830139  259800 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1017 18:56:37.835789  259800 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1017 18:56:37.836053  259800 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 18:56:37.868312  259800 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 18:56:37.868434  259800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 18:56:37.925108  259800 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-17 18:56:37.915706687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 18:56:37.925220  259800 docker.go:318] overlay module found
	I1017 18:56:37.928387  259800 out.go:99] Using the docker driver based on user configuration
	I1017 18:56:37.928438  259800 start.go:305] selected driver: docker
	I1017 18:56:37.928453  259800 start.go:925] validating driver "docker" against <nil>
	I1017 18:56:37.928601  259800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 18:56:37.981077  259800 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-17 18:56:37.972095061 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 18:56:37.981238  259800 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 18:56:37.981531  259800 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1017 18:56:37.981685  259800 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1017 18:56:37.984814  259800 out.go:171] Using Docker driver with root privileges
	I1017 18:56:37.987630  259800 cni.go:84] Creating CNI manager for ""
	I1017 18:56:37.987704  259800 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 18:56:37.987722  259800 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 18:56:37.987800  259800 start.go:349] cluster config:
	{Name:download-only-290584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-290584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 18:56:37.990681  259800 out.go:99] Starting "download-only-290584" primary control-plane node in "download-only-290584" cluster
	I1017 18:56:37.990716  259800 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 18:56:37.993520  259800 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1017 18:56:37.993553  259800 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 18:56:37.993655  259800 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 18:56:38.010910  259800 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1017 18:56:38.011081  259800 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1017 18:56:38.011110  259800 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1017 18:56:38.011116  259800 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1017 18:56:38.011128  259800 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1017 18:56:38.053656  259800 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 18:56:38.053685  259800 cache.go:58] Caching tarball of preloaded images
	I1017 18:56:38.053888  259800 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 18:56:38.057137  259800 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1017 18:56:38.057175  259800 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1017 18:56:38.152344  259800 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1017 18:56:38.152396  259800 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21753-257739/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-290584 host does not exist
	  To start a cluster, run: "minikube start -p download-only-290584"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-290584
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.8s)

                                                
                                                
=== RUN   TestBinaryMirror
I1017 18:56:44.993298  259596 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-789835 --alsologtostderr --binary-mirror http://127.0.0.1:35757 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-789835" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-789835
--- PASS: TestBinaryMirror (0.80s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-379549
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-379549: exit status 85 (69.364678ms)

                                                
                                                
-- stdout --
	* Profile "addons-379549" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-379549"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-379549
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-379549: exit status 85 (86.349087ms)

                                                
                                                
-- stdout --
	* Profile "addons-379549" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-379549"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/Setup (169.49s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-379549 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-379549 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m49.488630603s)
--- PASS: TestAddons/Setup (169.49s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-379549 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-379549 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.93s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-379549 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-379549 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [34227125-3da8-44cc-bbcf-a3085cf718b7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [34227125-3da8-44cc-bbcf-a3085cf718b7] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004512362s
addons_test.go:694: (dbg) Run:  kubectl --context addons-379549 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-379549 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-379549 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-379549 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.93s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.48s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-379549
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-379549: (12.177170606s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-379549
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-379549
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-379549
--- PASS: TestAddons/StoppedEnableDisable (12.48s)

                                                
                                    
TestCertOptions (38.22s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-533238 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-533238 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.320093686s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-533238 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-533238 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-533238 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-533238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-533238
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-533238: (2.165890036s)
--- PASS: TestCertOptions (38.22s)

                                                
                                    
TestCertExpiration (246.59s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-164379 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1017 20:01:48.230766  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-164379 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.880053581s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-164379 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-164379 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (21.823646254s)
helpers_test.go:175: Cleaning up "cert-expiration-164379" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-164379
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-164379: (2.883022199s)
--- PASS: TestCertExpiration (246.59s)

                                                
                                    
TestForceSystemdFlag (45.07s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-285387 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-285387 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.859104083s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-285387 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-285387" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-285387
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-285387: (2.787436235s)
--- PASS: TestForceSystemdFlag (45.07s)

                                                
                                    
TestForceSystemdEnv (45.93s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-945733 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-945733 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.817840681s)
helpers_test.go:175: Cleaning up "force-systemd-env-945733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-945733
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-945733: (3.113482978s)
--- PASS: TestForceSystemdEnv (45.93s)

                                                
                                    
TestErrorSpam/setup (34.31s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-605451 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-605451 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-605451 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-605451 --driver=docker  --container-runtime=crio: (34.306579561s)
--- PASS: TestErrorSpam/setup (34.31s)

                                                
                                    
TestErrorSpam/start (0.84s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 start --dry-run
--- PASS: TestErrorSpam/start (0.84s)

                                                
                                    
TestErrorSpam/status (1.07s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 status
--- PASS: TestErrorSpam/status (1.07s)

                                                
                                    
TestErrorSpam/pause (6.28s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 pause: exit status 80 (1.803778968s)

                                                
                                                
-- stdout --
	* Pausing node nospam-605451 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:03:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 pause: exit status 80 (1.980723324s)

                                                
                                                
-- stdout --
	* Pausing node nospam-605451 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:03:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 pause: exit status 80 (2.492921539s)

                                                
                                                
-- stdout --
	* Pausing node nospam-605451 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:03:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.28s)

                                                
                                    
TestErrorSpam/unpause (5.31s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 unpause: exit status 80 (1.437564948s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-605451 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:03:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 unpause: exit status 80 (1.552294186s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-605451 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:03:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 unpause: exit status 80 (2.315329764s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-605451 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T19:03:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.31s)

                                                
                                    
TestErrorSpam/stop (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 stop: (1.306502763s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-605451 --log_dir /tmp/nospam-605451 stop
--- PASS: TestErrorSpam/stop (1.51s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21753-257739/.minikube/files/etc/test/nested/copy/259596/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (82.35s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-998954 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1017 19:04:36.128916  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:36.135314  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:36.146673  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:36.167943  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:36.209323  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:36.290724  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:36.452148  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:36.773681  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:37.415188  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:38.696902  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:41.259776  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:46.381748  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:04:56.623745  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:05:17.105298  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-998954 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m22.340246811s)
--- PASS: TestFunctional/serial/StartWithProxy (82.35s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (29.05s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1017 19:05:20.494124  259596 config.go:182] Loaded profile config "functional-998954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-998954 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-998954 --alsologtostderr -v=8: (29.043857909s)
functional_test.go:678: soft start took 29.047919988s for "functional-998954" cluster.
I1017 19:05:49.538306  259596 config.go:182] Loaded profile config "functional-998954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (29.05s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-998954 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.4s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-998954 cache add registry.k8s.io/pause:3.1: (1.178160317s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-998954 cache add registry.k8s.io/pause:3.3: (1.128926234s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-998954 cache add registry.k8s.io/pause:latest: (1.095887016s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.40s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-998954 /tmp/TestFunctionalserialCacheCmdcacheadd_local3599925522/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 cache add minikube-local-cache-test:functional-998954
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 cache delete minikube-local-cache-test:functional-998954
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-998954
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.81s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-998954 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (306.400024ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.81s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 kubectl -- --context functional-998954 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-998954 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (36.06s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-998954 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1017 19:05:58.066821  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-998954 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.055056558s)
functional_test.go:776: restart took 36.05520249s for "functional-998954" cluster.
I1017 19:06:32.849279  259596 config.go:182] Loaded profile config "functional-998954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (36.06s)
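The restart above exercises --extra-config, which forwards a component.flag=value pair to the named Kubernetes component at start time; here it enables the NamespaceAutoProvision admission plugin on the apiserver. A sketch of the same invocation:

$ out/minikube-linux-arm64 start -p functional-998954 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
    --wait=all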

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-998954 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.46s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-998954 logs: (1.457008066s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.48s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 logs --file /tmp/TestFunctionalserialLogsFileCmd4178298679/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-998954 logs --file /tmp/TestFunctionalserialLogsFileCmd4178298679/001/logs.txt: (1.481323003s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                    
TestFunctional/serial/InvalidService (3.99s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-998954 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-998954
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-998954: exit status 115 (371.22254ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30857 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-998954 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.99s)
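The SVC_UNREACHABLE exit is the expected outcome here: invalid-svc has no running pod behind it. A quick way to confirm that state is to look at the service's endpoints; this is a generic kubectl diagnostic, not something the test itself runs:

$ kubectl --context functional-998954 get endpoints invalid-svc       # no addresses listed -> no backing pod
$ out/minikube-linux-arm64 -p functional-998954 service invalid-svc   # exits 115 (SVC_UNREACHABLE) while endpoints are empty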

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.58s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-998954 config get cpus: exit status 14 (103.256385ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-998954 config get cpus: exit status 14 (101.148879ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.58s)
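Both exit status 14 results above are the expected outcome of config get on a key that is not set (the test passes because of them). A sketch of the round trip:

$ out/minikube-linux-arm64 -p functional-998954 config get cpus    # exit 14: key not found
$ out/minikube-linux-arm64 -p functional-998954 config set cpus 2
$ out/minikube-linux-arm64 -p functional-998954 config get cpus    # prints 2
$ out/minikube-linux-arm64 -p functional-998954 config unset cpus
$ out/minikube-linux-arm64 -p functional-998954 config get cpus    # exit 14 again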

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.14s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-998954 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-998954 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 286784: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.14s)
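The "unable to kill pid" line is teardown noise rather than a failure: the dashboard process had already exited when the helper tried to stop it. The command under test, for reference:

$ out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-998954 --alsologtostderr -v=1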

                                                
                                    
TestFunctional/parallel/DryRun (0.63s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-998954 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-998954 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (305.82291ms)

                                                
                                                
-- stdout --
	* [functional-998954] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:06:39.957075  281509 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:06:39.957312  281509 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:06:39.957338  281509 out.go:374] Setting ErrFile to fd 2...
	I1017 19:06:39.957358  281509 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:06:39.957627  281509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:06:39.958012  281509 out.go:368] Setting JSON to false
	I1017 19:06:39.959036  281509 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6551,"bootTime":1760721449,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 19:06:39.959145  281509 start.go:141] virtualization:  
	I1017 19:06:39.964729  281509 out.go:179] * [functional-998954] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 19:06:39.967904  281509 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:06:39.967952  281509 notify.go:220] Checking for updates...
	I1017 19:06:39.972098  281509 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:06:39.975025  281509 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:06:39.978060  281509 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 19:06:39.981057  281509 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 19:06:39.983882  281509 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:06:39.987167  281509 config.go:182] Loaded profile config "functional-998954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:06:39.987740  281509 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:06:40.041305  281509 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 19:06:40.041466  281509 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:06:40.160289  281509 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-17 19:06:40.13640834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:06:40.160403  281509 docker.go:318] overlay module found
	I1017 19:06:40.163523  281509 out.go:179] * Using the docker driver based on existing profile
	I1017 19:06:40.166568  281509 start.go:305] selected driver: docker
	I1017 19:06:40.166610  281509 start.go:925] validating driver "docker" against &{Name:functional-998954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-998954 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:06:40.166759  281509 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:06:40.170552  281509 out.go:203] 
	W1017 19:06:40.174480  281509 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1017 19:06:40.177446  281509 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-998954 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.63s)
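Exit status 23 on the first invocation is the point of the test: 250MB is below the usable minimum of 1800MB reported in the RSRC_INSUFFICIENT_REQ_MEMORY message, while the second invocation with default memory validates cleanly. A sketch of the same check:

$ out/minikube-linux-arm64 start -p functional-998954 --dry-run --memory 250MB --driver=docker --container-runtime=crio   # exit 23: below the 1800MB minimum
$ out/minikube-linux-arm64 start -p functional-998954 --dry-run --driver=docker --container-runtime=crio                  # validates without starting anything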

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-998954 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-998954 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (227.019063ms)

                                                
                                                
-- stdout --
	* [functional-998954] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:17:00.735683  285026 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:17:00.735866  285026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:17:00.735909  285026 out.go:374] Setting ErrFile to fd 2...
	I1017 19:17:00.735922  285026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:17:00.737587  285026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:17:00.738130  285026 out.go:368] Setting JSON to false
	I1017 19:17:00.739175  285026 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7172,"bootTime":1760721449,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 19:17:00.739252  285026 start.go:141] virtualization:  
	I1017 19:17:00.742532  285026 out.go:179] * [functional-998954] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1017 19:17:00.746402  285026 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:17:00.746540  285026 notify.go:220] Checking for updates...
	I1017 19:17:00.752465  285026 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:17:00.756046  285026 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 19:17:00.759073  285026 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 19:17:00.762502  285026 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 19:17:00.765473  285026 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:17:00.768967  285026 config.go:182] Loaded profile config "functional-998954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:17:00.769661  285026 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:17:00.798763  285026 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 19:17:00.798884  285026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:17:00.872232  285026 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 19:17:00.861770533 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:17:00.872345  285026 docker.go:318] overlay module found
	I1017 19:17:00.875509  285026 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1017 19:17:00.878443  285026 start.go:305] selected driver: docker
	I1017 19:17:00.878465  285026 start.go:925] validating driver "docker" against &{Name:functional-998954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-998954 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:17:00.878582  285026 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:17:00.882230  285026 out.go:203] 
	W1017 19:17:00.884988  285026 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1017 19:17:00.887899  285026 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.03s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)
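A sketch of the three status variants exercised above. The labels in the -f template are free text (the log's command line spells one of them "kublet"); the {{.Field}} names must match minikube's status fields:

$ out/minikube-linux-arm64 -p functional-998954 status
$ out/minikube-linux-arm64 -p functional-998954 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
$ out/minikube-linux-arm64 -p functional-998954 status -o json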

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.02s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [b652a91d-deb0-4351-bc1b-a8d7d12eb3e9] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003176628s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-998954 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-998954 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-998954 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-998954 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [93bcae0a-e46d-4a03-a720-df855414ddee] Pending
helpers_test.go:352: "sp-pod" [93bcae0a-e46d-4a03-a720-df855414ddee] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [93bcae0a-e46d-4a03-a720-df855414ddee] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.00306715s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-998954 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-998954 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-998954 delete -f testdata/storage-provisioner/pod.yaml: (1.059059125s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-998954 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [c4dc74b1-7146-4e66-bfef-3aac500111f3] Pending
helpers_test.go:352: "sp-pod" [c4dc74b1-7146-4e66-bfef-3aac500111f3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003825338s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-998954 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.02s)
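The manifests under testdata/storage-provisioner are not reproduced in this log. The following is a rough equivalent inferred from the output (claim myclaim, pod sp-pod, container myfrontend, mount path /tmp/mount, label test=storage-provisioner); the storage size, access mode, and container image are assumptions, not the test's actual values:

$ kubectl --context functional-998954 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]   # assumption
  resources:
    requests:
      storage: 500Mi               # assumption
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: nginx                   # assumption; the real testdata manifest may use a different image
    volumeMounts:
    - mountPath: /tmp/mount
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF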

                                                
                                    
TestFunctional/parallel/SSHCmd (0.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.32s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh -n functional-998954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 cp functional-998954:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd615479210/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh -n functional-998954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh -n functional-998954 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.32s)
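minikube cp copies in either direction and creates missing directories on the node (the third copy above lands in /tmp/does/not/exist and is then read back). A sketch using the same paths as the test:

$ out/minikube-linux-arm64 -p functional-998954 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> node
$ out/minikube-linux-arm64 -p functional-998954 cp functional-998954:/home/docker/cp-test.txt ./cp-test.txt   # node -> host
$ out/minikube-linux-arm64 -p functional-998954 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt       # target directory is created on the node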

                                                
                                    
TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/259596/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "sudo cat /etc/test/nested/copy/259596/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

                                                
                                    
TestFunctional/parallel/CertSync (2.25s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/259596.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "sudo cat /etc/ssl/certs/259596.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/259596.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "sudo cat /usr/share/ca-certificates/259596.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2595962.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "sudo cat /etc/ssl/certs/2595962.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2595962.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "sudo cat /usr/share/ca-certificates/2595962.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.25s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-998954 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
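The label check is a plain kubectl go-template; the same one-liner from the log can be reused against any context:

$ kubectl --context functional-998954 get nodes --output=go-template \
    --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'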

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-998954 ssh "sudo systemctl is-active docker": exit status 1 (366.985765ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-998954 ssh "sudo systemctl is-active containerd": exit status 1 (285.453439ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)
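The two non-zero exits above are the expected result: with crio as the selected runtime, docker and containerd must report inactive. A sketch of the same probe (the crio line is added for completeness and is not run by the test):

$ out/minikube-linux-arm64 -p functional-998954 ssh "sudo systemctl is-active crio"         # expected: active (assumption, not checked by the test)
$ out/minikube-linux-arm64 -p functional-998954 ssh "sudo systemctl is-active docker"       # expected: inactive, non-zero exit
$ out/minikube-linux-arm64 -p functional-998954 ssh "sudo systemctl is-active containerd"   # expected: inactive, non-zero exit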

                                                
                                    
TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (1.27s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 version -o=json --components
2025/10/17 19:17:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-998954 version -o=json --components: (1.266125993s)
--- PASS: TestFunctional/parallel/Version/components (1.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-998954 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-998954 image ls --format short --alsologtostderr:
I1017 19:17:24.057496  288080 out.go:360] Setting OutFile to fd 1 ...
I1017 19:17:24.057721  288080 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:17:24.057751  288080 out.go:374] Setting ErrFile to fd 2...
I1017 19:17:24.057771  288080 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:17:24.058193  288080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
I1017 19:17:24.059224  288080 config.go:182] Loaded profile config "functional-998954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:17:24.059450  288080 config.go:182] Loaded profile config "functional-998954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:17:24.060184  288080 cli_runner.go:164] Run: docker container inspect functional-998954 --format={{.State.Status}}
I1017 19:17:24.096073  288080 ssh_runner.go:195] Run: systemctl --version
I1017 19:17:24.096135  288080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-998954
I1017 19:17:24.118839  288080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/functional-998954/id_rsa Username:docker}
I1017 19:17:24.251858  288080 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)
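This and the following three ImageCommands/ImageList* subtests differ only in the --format flag passed to image ls. A sketch of the four variants:

$ out/minikube-linux-arm64 -p functional-998954 image ls --format short
$ out/minikube-linux-arm64 -p functional-998954 image ls --format table
$ out/minikube-linux-arm64 -p functional-998954 image ls --format json
$ out/minikube-linux-arm64 -p functional-998954 image ls --format yaml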

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-998954 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/library/nginx                 │ latest             │ e35ad067421cc │ 184MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-998954 image ls --format table --alsologtostderr:
I1017 19:17:24.791758  288277 out.go:360] Setting OutFile to fd 1 ...
I1017 19:17:24.791983  288277 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:17:24.792010  288277 out.go:374] Setting ErrFile to fd 2...
I1017 19:17:24.792028  288277 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:17:24.792331  288277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
I1017 19:17:24.799771  288277 config.go:182] Loaded profile config "functional-998954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:17:24.799986  288277 config.go:182] Loaded profile config "functional-998954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:17:24.800964  288277 cli_runner.go:164] Run: docker container inspect functional-998954 --format={{.State.Status}}
I1017 19:17:24.819734  288277 ssh_runner.go:195] Run: systemctl --version
I1017 19:17:24.819788  288277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-998954
I1017 19:17:24.837749  288277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/functional-998954/id_rsa Username:docker}
I1017 19:17:24.954108  288277 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-998954 image ls --format json --alsologtostderr:
[{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9","repoDigests":["docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6","docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a"],"repoTags":["docker.io/library/nginx:latest"],"size":"184136558"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf07325
40b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1611cd07b61d57d
bbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad04538440152972
1ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb309708
1e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc142
5bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-998954 image ls --format json --alsologtostderr:
I1017 19:17:24.215431  288116 out.go:360] Setting OutFile to fd 1 ...
I1017 19:17:24.215556  288116 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:17:24.215567  288116 out.go:374] Setting ErrFile to fd 2...
I1017 19:17:24.215573  288116 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:17:24.215834  288116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
I1017 19:17:24.216437  288116 config.go:182] Loaded profile config "functional-998954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:17:24.216609  288116 config.go:182] Loaded profile config "functional-998954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:17:24.217158  288116 cli_runner.go:164] Run: docker container inspect functional-998954 --format={{.State.Status}}
I1017 19:17:24.234782  288116 ssh_runner.go:195] Run: systemctl --version
I1017 19:17:24.234880  288116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-998954
I1017 19:17:24.257292  288116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/functional-998954/id_rsa Username:docker}
I1017 19:17:24.367882  288116 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-998954 image ls --format yaml --alsologtostderr:
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
- docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a
repoTags:
- docker.io/library/nginx:latest
size: "184136558"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-998954 image ls --format yaml --alsologtostderr:
I1017 19:17:24.506381  288197 out.go:360] Setting OutFile to fd 1 ...
I1017 19:17:24.506519  288197 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:17:24.506529  288197 out.go:374] Setting ErrFile to fd 2...
I1017 19:17:24.506534  288197 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:17:24.506894  288197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
I1017 19:17:24.507818  288197 config.go:182] Loaded profile config "functional-998954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:17:24.507964  288197 config.go:182] Loaded profile config "functional-998954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:17:24.508841  288197 cli_runner.go:164] Run: docker container inspect functional-998954 --format={{.State.Status}}
I1017 19:17:24.539991  288197 ssh_runner.go:195] Run: systemctl --version
I1017 19:17:24.540052  288197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-998954
I1017 19:17:24.561681  288197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/functional-998954/id_rsa Username:docker}
I1017 19:17:24.668003  288197 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)
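Note: both the JSON and YAML listings above are rendered from the node's CRI-O image store; the stderr traces show the underlying call is "sudo crictl images --output json". A minimal way to reproduce the listing by hand, assuming the functional-998954 profile is still running and jq is installed on the host (jq is not used by the test itself):

# same code path as the test
out/minikube-linux-arm64 -p functional-998954 image ls --format yaml
# or query CRI-O directly on the node, mirroring the stderr trace above
out/minikube-linux-arm64 -p functional-998954 ssh -- sudo crictl images --output json | jq '.images[].repoTags'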

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-998954 ssh pgrep buildkitd: exit status 1 (331.600908ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 image build -t localhost/my-image:functional-998954 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-998954 image build -t localhost/my-image:functional-998954 testdata/build --alsologtostderr: (3.346024423s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-998954 image build -t localhost/my-image:functional-998954 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0cbb3e077e1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-998954
--> ead21e5ecbc
Successfully tagged localhost/my-image:functional-998954
ead21e5ecbcc5c90550a5f7a9121a25247aa25ac398d273325a9ff031e943946
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-998954 image build -t localhost/my-image:functional-998954 testdata/build --alsologtostderr:
I1017 19:17:24.687876  288254 out.go:360] Setting OutFile to fd 1 ...
I1017 19:17:24.688722  288254 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:17:24.688741  288254 out.go:374] Setting ErrFile to fd 2...
I1017 19:17:24.688748  288254 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:17:24.689011  288254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
I1017 19:17:24.689804  288254 config.go:182] Loaded profile config "functional-998954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:17:24.691078  288254 config.go:182] Loaded profile config "functional-998954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 19:17:24.691866  288254 cli_runner.go:164] Run: docker container inspect functional-998954 --format={{.State.Status}}
I1017 19:17:24.722010  288254 ssh_runner.go:195] Run: systemctl --version
I1017 19:17:24.722068  288254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-998954
I1017 19:17:24.742391  288254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/functional-998954/id_rsa Username:docker}
I1017 19:17:24.856464  288254 build_images.go:161] Building image from path: /tmp/build.3140037960.tar
I1017 19:17:24.856556  288254 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1017 19:17:24.868019  288254 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3140037960.tar
I1017 19:17:24.873438  288254 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3140037960.tar: stat -c "%s %y" /var/lib/minikube/build/build.3140037960.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3140037960.tar': No such file or directory
I1017 19:17:24.873471  288254 ssh_runner.go:362] scp /tmp/build.3140037960.tar --> /var/lib/minikube/build/build.3140037960.tar (3072 bytes)
I1017 19:17:24.898206  288254 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3140037960
I1017 19:17:24.906425  288254 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3140037960 -xf /var/lib/minikube/build/build.3140037960.tar
I1017 19:17:24.914313  288254 crio.go:315] Building image: /var/lib/minikube/build/build.3140037960
I1017 19:17:24.914382  288254 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-998954 /var/lib/minikube/build/build.3140037960 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1017 19:17:27.949108  288254 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-998954 /var/lib/minikube/build/build.3140037960 --cgroup-manager=cgroupfs: (3.034698363s)
I1017 19:17:27.949180  288254 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3140037960
I1017 19:17:27.957037  288254 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3140037960.tar
I1017 19:17:27.965330  288254 build_images.go:217] Built localhost/my-image:functional-998954 from /tmp/build.3140037960.tar
I1017 19:17:27.965363  288254 build_images.go:133] succeeded building to: functional-998954
I1017 19:17:27.965369  288254 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)
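Note: the stderr trace above shows the flow "minikube image build" takes on a CRI-O node: the build context is tarred on the host, copied to /var/lib/minikube/build, unpacked, and built with podman inside the node. A rough manual equivalent, assuming the same profile (build.3140037960 is the temporary directory name from this particular run):

out/minikube-linux-arm64 -p functional-998954 image build -t localhost/my-image:functional-998954 testdata/build --alsologtostderr
# roughly what runs on the node, per the log above
out/minikube-linux-arm64 -p functional-998954 ssh -- sudo podman build -t localhost/my-image:functional-998954 /var/lib/minikube/build/build.3140037960 --cgroup-manager=cgroupfs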

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.69s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-998954
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.69s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)
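Note: all three update-context cases run the same command, which re-syncs the kubeconfig entry for the profile with the cluster's current API server address. A minimal sketch, assuming the functional-998954 profile and context exist:

out/minikube-linux-arm64 -p functional-998954 update-context --alsologtostderr -v=2
kubectl config get-contexts functional-998954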

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 image rm kicbase/echo-server:functional-998954 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.51s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "444.700519ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "66.513087ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.65s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "575.669861ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "75.932342ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.65s)
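Note: profile list -o json is machine-readable, which is what the timing checks above exercise. A small sketch of pulling profile names out of it, assuming jq is installed on the host and the field names used by this minikube version (valid profiles under a "valid" array with a "Name" field):

out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'
# --light skips the status probes, which is why it returns so much faster above
out/minikube-linux-arm64 profile list -o json --light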

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-998954 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-998954 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-998954 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-998954 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 283513: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-998954 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.33s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-998954 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [87286287-a2fe-4217-a585-31e4b8303445] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [87286287-a2fe-4217-a585-31e4b8303445] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004413418s
I1017 19:06:56.616065  259596 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.33s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-998954 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.196.122 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
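Note: the tunnel subtests exercise the usual LoadBalancer workflow on minikube: with "minikube tunnel" running, a Service of type LoadBalancer gets an external IP that is reachable from the host. A condensed version of WaitService/IngressIP/AccessDirect, assuming testdata/testsvc.yaml creates the nginx-svc LoadBalancer service as in this run:

# in one terminal, keep the tunnel running
out/minikube-linux-arm64 -p functional-998954 tunnel --alsologtostderr
# in another terminal, once the nginx-svc pod is Running
kubectl --context functional-998954 apply -f testdata/testsvc.yaml
SVC_IP=$(kubectl --context functional-998954 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://${SVC_IP}"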

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-998954 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.14s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-998954 /tmp/TestFunctionalparallelMountCmdany-port389916179/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760728620898666856" to /tmp/TestFunctionalparallelMountCmdany-port389916179/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760728620898666856" to /tmp/TestFunctionalparallelMountCmdany-port389916179/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760728620898666856" to /tmp/TestFunctionalparallelMountCmdany-port389916179/001/test-1760728620898666856
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-998954 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (432.864119ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1017 19:17:01.334383  259596 retry.go:31] will retry after 504.6713ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 17 19:17 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 17 19:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 17 19:17 test-1760728620898666856
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh cat /mount-9p/test-1760728620898666856
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-998954 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [38a27064-f035-4515-8ffa-a13329ebef8a] Pending
helpers_test.go:352: "busybox-mount" [38a27064-f035-4515-8ffa-a13329ebef8a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [38a27064-f035-4515-8ffa-a13329ebef8a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [38a27064-f035-4515-8ffa-a13329ebef8a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003925969s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-998954 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-998954 /tmp/TestFunctionalparallelMountCmdany-port389916179/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.14s)
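Note: the mount subtests serve a host directory into the node over 9p and verify it with findmnt. A minimal sketch, assuming an existing host directory ./hostdir (an illustrative name; the test uses a per-run temp dir):

# keep this running; it serves ./hostdir to the node at /mount-9p
out/minikube-linux-arm64 mount -p functional-998954 ./hostdir:/mount-9p --alsologtostderr -v=1
# verify from inside the node
out/minikube-linux-arm64 -p functional-998954 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-arm64 -p functional-998954 ssh -- ls -la /mount-9p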

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.95s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-998954 /tmp/TestFunctionalparallelMountCmdspecific-port2646614653/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-998954 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (340.039559ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1017 19:17:08.372850  259596 retry.go:31] will retry after 583.506949ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-998954 /tmp/TestFunctionalparallelMountCmdspecific-port2646614653/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-998954 ssh "sudo umount -f /mount-9p": exit status 1 (271.556239ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-998954 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-998954 /tmp/TestFunctionalparallelMountCmdspecific-port2646614653/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.29s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-998954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2359858645/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-998954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2359858645/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-998954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2359858645/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-998954 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-998954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2359858645/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-998954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2359858645/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-998954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2359858645/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-998954 service list: (1.361297316s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.4s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-998954 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-998954 service list -o json: (1.402802577s)
functional_test.go:1504: Took "1.402877462s" to run "out/minikube-linux-arm64 -p functional-998954 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.40s)
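Note: service list -o json prints one entry per Kubernetes service in the cluster, which the JSONOutput check above parses. A sketch of filtering it on the host, assuming jq is installed and the field names of this minikube version:

out/minikube-linux-arm64 -p functional-998954 service list -o json | jq -r '.[].Name'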

                                                
                                    
TestFunctional/delete_echo-server_images (0.05s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-998954
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-998954
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-998954
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (199.98s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1017 19:19:36.125163  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m19.021409065s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (199.98s)
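Note: StartCluster brings up a multi-control-plane cluster in one command via --ha (the status output further down in this report lists ha-254035, ha-254035-m02 and ha-254035-m03 as control planes). A minimal reproduction using the driver and runtime from this report:

out/minikube-linux-arm64 -p ha-254035 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5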

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.43s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 kubectl -- rollout status deployment/busybox: (3.603709267s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- exec busybox-7b57f96db7-6xjlp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- exec busybox-7b57f96db7-979zm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- exec busybox-7b57f96db7-nc6x2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- exec busybox-7b57f96db7-6xjlp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- exec busybox-7b57f96db7-979zm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- exec busybox-7b57f96db7-nc6x2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- exec busybox-7b57f96db7-6xjlp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- exec busybox-7b57f96db7-979zm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- exec busybox-7b57f96db7-nc6x2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.43s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.5s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- exec busybox-7b57f96db7-6xjlp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- exec busybox-7b57f96db7-6xjlp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- exec busybox-7b57f96db7-979zm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- exec busybox-7b57f96db7-979zm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- exec busybox-7b57f96db7-nc6x2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 kubectl -- exec busybox-7b57f96db7-nc6x2 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.50s)
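Note: PingHostFromPods resolves host.minikube.internal inside each busybox pod and pings the resulting host gateway address (192.168.49.1 on this run's docker-driver network). A one-pod version, using a pod name from this run:

out/minikube-linux-arm64 -p ha-254035 kubectl -- exec busybox-7b57f96db7-6xjlp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
out/minikube-linux-arm64 -p ha-254035 kubectl -- exec busybox-7b57f96db7-6xjlp -- sh -c "ping -c 1 192.168.49.1"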

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.9s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 node add --alsologtostderr -v 5
E1017 19:20:59.193138  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:21:48.231258  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:21:48.237678  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:21:48.249691  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:21:48.271124  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:21:48.312480  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:21:48.393848  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:21:48.555282  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:21:48.876922  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:21:49.518905  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:21:50.801225  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:21:53.362724  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 node add --alsologtostderr -v 5: (58.851943272s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5
E1017 19:21:58.484441  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5: (1.052591237s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.90s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-254035 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.28s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.279812502s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.28s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.59s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 status --output json --alsologtostderr -v 5: (1.108345072s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp testdata/cp-test.txt ha-254035:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp ha-254035:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1188979754/001/cp-test_ha-254035.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp ha-254035:/home/docker/cp-test.txt ha-254035-m02:/home/docker/cp-test_ha-254035_ha-254035-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m02 "sudo cat /home/docker/cp-test_ha-254035_ha-254035-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp ha-254035:/home/docker/cp-test.txt ha-254035-m03:/home/docker/cp-test_ha-254035_ha-254035-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m03 "sudo cat /home/docker/cp-test_ha-254035_ha-254035-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp ha-254035:/home/docker/cp-test.txt ha-254035-m04:/home/docker/cp-test_ha-254035_ha-254035-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m04 "sudo cat /home/docker/cp-test_ha-254035_ha-254035-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp testdata/cp-test.txt ha-254035-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp ha-254035-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1188979754/001/cp-test_ha-254035-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp ha-254035-m02:/home/docker/cp-test.txt ha-254035:/home/docker/cp-test_ha-254035-m02_ha-254035.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035 "sudo cat /home/docker/cp-test_ha-254035-m02_ha-254035.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp ha-254035-m02:/home/docker/cp-test.txt ha-254035-m03:/home/docker/cp-test_ha-254035-m02_ha-254035-m03.txt
E1017 19:22:08.726581  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m03 "sudo cat /home/docker/cp-test_ha-254035-m02_ha-254035-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp ha-254035-m02:/home/docker/cp-test.txt ha-254035-m04:/home/docker/cp-test_ha-254035-m02_ha-254035-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m04 "sudo cat /home/docker/cp-test_ha-254035-m02_ha-254035-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp testdata/cp-test.txt ha-254035-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp ha-254035-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1188979754/001/cp-test_ha-254035-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp ha-254035-m03:/home/docker/cp-test.txt ha-254035:/home/docker/cp-test_ha-254035-m03_ha-254035.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035 "sudo cat /home/docker/cp-test_ha-254035-m03_ha-254035.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp ha-254035-m03:/home/docker/cp-test.txt ha-254035-m02:/home/docker/cp-test_ha-254035-m03_ha-254035-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m02 "sudo cat /home/docker/cp-test_ha-254035-m03_ha-254035-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp ha-254035-m03:/home/docker/cp-test.txt ha-254035-m04:/home/docker/cp-test_ha-254035-m03_ha-254035-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m04 "sudo cat /home/docker/cp-test_ha-254035-m03_ha-254035-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp testdata/cp-test.txt ha-254035-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1188979754/001/cp-test_ha-254035-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035:/home/docker/cp-test_ha-254035-m04_ha-254035.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035 "sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035-m02:/home/docker/cp-test_ha-254035-m04_ha-254035-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m02 "sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 cp ha-254035-m04:/home/docker/cp-test.txt ha-254035-m03:/home/docker/cp-test_ha-254035-m04_ha-254035-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 ssh -n ha-254035-m03 "sudo cat /home/docker/cp-test_ha-254035-m04_ha-254035-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.59s)
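The CopyFile steps above repeat one round trip per node pair: minikube cp a file onto a node, then minikube ssh -n <node> "sudo cat ..." to read it back and confirm it arrived intact. A minimal Go sketch of that round trip, assuming the same binary path; the profile and node names are copied from this run and the helper itself is illustrative, not the helpers_test.go implementation:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

// run invokes the minikube binary used throughout this report and returns
// its combined output.
func run(args ...string) ([]byte, error) {
	cmd := exec.Command("out/minikube-linux-arm64", args...)
	var out bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &out
	err := cmd.Run()
	return out.Bytes(), err
}

func main() {
	const profile = "ha-254035" // profile name from this run
	// Copy a local file onto one node ...
	if out, err := run("-p", profile, "cp", "testdata/cp-test.txt", "ha-254035-m03:/home/docker/cp-test.txt"); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}
	// ... then read it back over SSH to confirm the copy landed.
	out, err := run("-p", profile, "ssh", "-n", "ha-254035-m03", "sudo cat /home/docker/cp-test.txt")
	if err != nil {
		log.Fatalf("ssh cat failed: %v\n%s", err, out)
	}
	fmt.Printf("copy verified, %d bytes read back\n", len(out))
}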

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 node stop m02 --alsologtostderr -v 5
E1017 19:22:29.208668  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 node stop m02 --alsologtostderr -v 5: (12.06111196s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5: exit status 7 (797.561468ms)
-- stdout --
	ha-254035
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-254035-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-254035-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-254035-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1017 19:22:32.017744  303107 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:22:32.018053  303107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:22:32.018085  303107 out.go:374] Setting ErrFile to fd 2...
	I1017 19:22:32.018105  303107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:22:32.018729  303107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:22:32.019095  303107 out.go:368] Setting JSON to false
	I1017 19:22:32.019127  303107 mustload.go:65] Loading cluster: ha-254035
	I1017 19:22:32.019510  303107 config.go:182] Loaded profile config "ha-254035": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:22:32.019527  303107 status.go:174] checking status of ha-254035 ...
	I1017 19:22:32.020031  303107 cli_runner.go:164] Run: docker container inspect ha-254035 --format={{.State.Status}}
	I1017 19:22:32.020580  303107 notify.go:220] Checking for updates...
	I1017 19:22:32.048211  303107 status.go:371] ha-254035 host status = "Running" (err=<nil>)
	I1017 19:22:32.048238  303107 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:22:32.048594  303107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035
	I1017 19:22:32.078893  303107 host.go:66] Checking if "ha-254035" exists ...
	I1017 19:22:32.079203  303107 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:22:32.079262  303107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035
	I1017 19:22:32.102667  303107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035/id_rsa Username:docker}
	I1017 19:22:32.213978  303107 ssh_runner.go:195] Run: systemctl --version
	I1017 19:22:32.221052  303107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:22:32.234657  303107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:22:32.292049  303107 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-17 19:22:32.281994904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:22:32.292779  303107 kubeconfig.go:125] found "ha-254035" server: "https://192.168.49.254:8443"
	I1017 19:22:32.292814  303107 api_server.go:166] Checking apiserver status ...
	I1017 19:22:32.292866  303107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:22:32.306380  303107 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1244/cgroup
	I1017 19:22:32.315147  303107 api_server.go:182] apiserver freezer: "2:freezer:/docker/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/crio/crio-09028773f77ac351a6c0764524ec9c547d78dc7bb648fdc1f51503a4e67ee3ae"
	I1017 19:22:32.315217  303107 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7f770318d5dc3f766c8eb022ffe03152042c18f5636af2c3a200c94c1a08c2f8/crio/crio-09028773f77ac351a6c0764524ec9c547d78dc7bb648fdc1f51503a4e67ee3ae/freezer.state
	I1017 19:22:32.324514  303107 api_server.go:204] freezer state: "THAWED"
	I1017 19:22:32.324556  303107 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1017 19:22:32.333513  303107 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1017 19:22:32.333541  303107 status.go:463] ha-254035 apiserver status = Running (err=<nil>)
	I1017 19:22:32.333552  303107 status.go:176] ha-254035 status: &{Name:ha-254035 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:22:32.333568  303107 status.go:174] checking status of ha-254035-m02 ...
	I1017 19:22:32.333857  303107 cli_runner.go:164] Run: docker container inspect ha-254035-m02 --format={{.State.Status}}
	I1017 19:22:32.351141  303107 status.go:371] ha-254035-m02 host status = "Stopped" (err=<nil>)
	I1017 19:22:32.351166  303107 status.go:384] host is not running, skipping remaining checks
	I1017 19:22:32.351174  303107 status.go:176] ha-254035-m02 status: &{Name:ha-254035-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:22:32.351193  303107 status.go:174] checking status of ha-254035-m03 ...
	I1017 19:22:32.351697  303107 cli_runner.go:164] Run: docker container inspect ha-254035-m03 --format={{.State.Status}}
	I1017 19:22:32.368810  303107 status.go:371] ha-254035-m03 host status = "Running" (err=<nil>)
	I1017 19:22:32.368838  303107 host.go:66] Checking if "ha-254035-m03" exists ...
	I1017 19:22:32.369140  303107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m03
	I1017 19:22:32.386662  303107 host.go:66] Checking if "ha-254035-m03" exists ...
	I1017 19:22:32.386967  303107 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:22:32.387017  303107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m03
	I1017 19:22:32.405356  303107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m03/id_rsa Username:docker}
	I1017 19:22:32.514107  303107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:22:32.527759  303107 kubeconfig.go:125] found "ha-254035" server: "https://192.168.49.254:8443"
	I1017 19:22:32.527790  303107 api_server.go:166] Checking apiserver status ...
	I1017 19:22:32.527831  303107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:22:32.538933  303107 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup
	I1017 19:22:32.547889  303107 api_server.go:182] apiserver freezer: "2:freezer:/docker/302e05a98c62ddb3b6a61bc8c8013b1739d38558accd3bb95e0763044ef85f99/crio/crio-30531c6c1b6d879d864c32daeb5e8ad2ea2aaf498ce47c78518ab3894cbe4cd9"
	I1017 19:22:32.548015  303107 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/302e05a98c62ddb3b6a61bc8c8013b1739d38558accd3bb95e0763044ef85f99/crio/crio-30531c6c1b6d879d864c32daeb5e8ad2ea2aaf498ce47c78518ab3894cbe4cd9/freezer.state
	I1017 19:22:32.556203  303107 api_server.go:204] freezer state: "THAWED"
	I1017 19:22:32.556239  303107 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1017 19:22:32.564976  303107 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1017 19:22:32.565025  303107 status.go:463] ha-254035-m03 apiserver status = Running (err=<nil>)
	I1017 19:22:32.565035  303107 status.go:176] ha-254035-m03 status: &{Name:ha-254035-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:22:32.565077  303107 status.go:174] checking status of ha-254035-m04 ...
	I1017 19:22:32.565416  303107 cli_runner.go:164] Run: docker container inspect ha-254035-m04 --format={{.State.Status}}
	I1017 19:22:32.586894  303107 status.go:371] ha-254035-m04 host status = "Running" (err=<nil>)
	I1017 19:22:32.586924  303107 host.go:66] Checking if "ha-254035-m04" exists ...
	I1017 19:22:32.587275  303107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-254035-m04
	I1017 19:22:32.615132  303107 host.go:66] Checking if "ha-254035-m04" exists ...
	I1017 19:22:32.615519  303107 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:22:32.615579  303107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-254035-m04
	I1017 19:22:32.635150  303107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/ha-254035-m04/id_rsa Username:docker}
	I1017 19:22:32.742258  303107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:22:32.755893  303107 status.go:176] ha-254035-m04 status: &{Name:ha-254035-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.86s)
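The stderr trace above shows how the status probe decides an apiserver is healthy: it finds the kube-apiserver process, reads its freezer cgroup state, and finally requests /healthz on the control-plane VIP, treating HTTP 200 as Running. A minimal Go sketch of that last step, assuming the same https://192.168.49.254:8443 endpoint; skipping TLS verification here is an assumption made to keep the probe self-contained:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The integration cluster uses self-signed certificates, so this
	// illustrative probe skips certificate verification.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusOK {
		fmt.Println("apiserver status = Running")
	} else {
		fmt.Println("apiserver status = Error, healthz returned", resp.StatusCode)
	}
}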

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (31.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 node start m02 --alsologtostderr -v 5: (29.90251028s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-254035 status --alsologtostderr -v 5: (1.310749979s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (31.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.31127494s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

                                                
                                    
x
+
TestJSONOutput/start/Command (80.58s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-999484 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1017 19:36:48.231341  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-999484 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m20.573336353s)
--- PASS: TestJSONOutput/start/Command (80.58s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.84s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-999484 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-999484 --output=json --user=testUser: (5.842796217s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-889215 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-889215 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (98.971889ms)
-- stdout --
	{"specversion":"1.0","id":"802d1d1f-6390-434e-8618-3a47a04f7dab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-889215] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d3045d50-c5ec-4b8f-a0c6-9e7bb45115d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21753"}}
	{"specversion":"1.0","id":"e7498680-fc06-4c31-965b-a64e528f0eaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fd03e2c7-a59f-4dad-a894-28d9bcb8ad94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig"}}
	{"specversion":"1.0","id":"84c24746-abaf-4c6c-88f0-d71bef077635","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube"}}
	{"specversion":"1.0","id":"b885f381-191a-4d89-b7e1-b8ee5ef9d4bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"85a9de28-7ad3-4428-a133-b84ae57f3cbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ce59ae5e-20f9-4bfb-ae7f-c8d3d12cbe83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-889215" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-889215
--- PASS: TestErrorJSONOutput (0.24s)
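Each stdout line above is a CloudEvents-style JSON record; the test asserts that driver 'fail' produces an io.k8s.sigs.minikube.error event with exit code 56. A small Go sketch that decodes one such line; the struct covers only the fields inspected here, and the full event schema has more:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// cloudEvent models only the fields examined in this sketch.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// One line taken from the stdout quoted above.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		log.Fatal(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}
}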

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (41.6s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-216843 --network=
E1017 19:37:39.196121  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-216843 --network=: (39.290290637s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-216843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-216843
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-216843: (2.284188597s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.60s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (37.36s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-410721 --network=bridge
E1017 19:38:11.296101  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-410721 --network=bridge: (35.186242293s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-410721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-410721
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-410721: (2.141557426s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.36s)

                                                
                                    
x
+
TestKicExistingNetwork (35.5s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1017 19:38:45.224214  259596 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1017 19:38:45.274521  259596 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1017 19:38:45.276300  259596 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1017 19:38:45.276399  259596 cli_runner.go:164] Run: docker network inspect existing-network
W1017 19:38:45.322237  259596 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1017 19:38:45.322345  259596 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1017 19:38:45.322421  259596 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1017 19:38:45.322645  259596 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1017 19:38:45.351410  259596 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9f667d9c3ea2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:fc:1d:c6:d2:da} reservation:<nil>}
I1017 19:38:45.356184  259596 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1017 19:38:45.357567  259596 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001de1990}
I1017 19:38:45.358264  259596 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I1017 19:38:45.358559  259596 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1017 19:38:45.447470  259596 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-495805 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-495805 --network=existing-network: (33.115199437s)
helpers_test.go:175: Cleaning up "existing-network-495805" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-495805
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-495805: (2.124004519s)
I1017 19:39:20.704901  259596 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.50s)
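The trace above shows the free-subnet search: 192.168.49.0/24 is skipped as taken, 192.168.58.0/24 as reserved, and 192.168.67.0/24 is used for docker network create. A Go sketch of that walk, with the taken/reserved sets hard-coded from this log and the step size of 9 only inferred from the 49, 58, 67 progression:

package main

import "fmt"

func main() {
	// Hard-coded from the log above; minikube discovers these by inspecting
	// existing docker networks.
	taken := map[string]bool{"192.168.49.0/24": true}
	reserved := map[string]bool{"192.168.58.0/24": true}
	for third := 49; third <= 254; third += 9 { // 49 -> 58 -> 67, as in the log
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		switch {
		case taken[cidr]:
			fmt.Println("skipping subnet", cidr, "that is taken")
		case reserved[cidr]:
			fmt.Println("skipping subnet", cidr, "that is reserved")
		default:
			fmt.Println("using free private subnet", cidr)
			return
		}
	}
}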

                                                
                                    
x
+
TestKicCustomSubnet (39.95s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-051212 --subnet=192.168.60.0/24
E1017 19:39:36.132704  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-051212 --subnet=192.168.60.0/24: (37.402064358s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-051212 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-051212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-051212
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-051212: (2.52709067s)
--- PASS: TestKicCustomSubnet (39.95s)
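The assertion behind this test is a single docker network inspect call compared against the requested CIDR. A Go sketch of that check, using the same inspect format string and the 192.168.60.0/24 value from the run above:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	want := "192.168.60.0/24" // subnet requested on the command line above
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-051212",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		log.Fatalf("subnet mismatch: got %s, want %s", got, want)
	}
	fmt.Println("network uses the requested subnet", want)
}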

                                                
                                    
x
+
TestKicStaticIP (37.98s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-577771 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-577771 --static-ip=192.168.200.200: (35.613060201s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-577771 ip
helpers_test.go:175: Cleaning up "static-ip-577771" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-577771
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-577771: (2.212159566s)
--- PASS: TestKicStaticIP (37.98s)

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (73.5s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-681210 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-681210 --driver=docker  --container-runtime=crio: (32.804378755s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-683786 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-683786 --driver=docker  --container-runtime=crio: (35.106443064s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-681210
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-683786
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-683786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-683786
E1017 19:41:48.231380  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-683786: (2.077628677s)
helpers_test.go:175: Cleaning up "first-681210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-681210
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-681210: (2.095971922s)
--- PASS: TestMinikubeProfile (73.50s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (9.53s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-335448 --memory=3072 --mount-string /tmp/TestMountStartserial1286852028/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-335448 --memory=3072 --mount-string /tmp/TestMountStartserial1286852028/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.532628582s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.53s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-335448 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.76s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-337318 --memory=3072 --mount-string /tmp/TestMountStartserial1286852028/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-337318 --memory=3072 --mount-string /tmp/TestMountStartserial1286852028/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.75781726s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.76s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-337318 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-335448 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-335448 --alsologtostderr -v=5: (1.713245986s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-337318 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-337318
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-337318: (1.30002351s)
--- PASS: TestMountStart/serial/Stop (1.30s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.63s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-337318
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-337318: (6.627476339s)
--- PASS: TestMountStart/serial/RestartStopped (7.63s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-337318 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (138.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-897553 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1017 19:44:36.125171  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-897553 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m17.766106815s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (138.36s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-897553 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-897553 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-897553 -- rollout status deployment/busybox: (2.575191227s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-897553 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-897553 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-897553 -- exec busybox-7b57f96db7-6b5lc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-897553 -- exec busybox-7b57f96db7-b7l6x -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-897553 -- exec busybox-7b57f96db7-6b5lc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-897553 -- exec busybox-7b57f96db7-b7l6x -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-897553 -- exec busybox-7b57f96db7-6b5lc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-897553 -- exec busybox-7b57f96db7-b7l6x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.35s)
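The deployment check above fans out over both busybox pods and three DNS names, requiring every nslookup to succeed. A Go sketch of that loop; the pod names are copied from this log, and a plain kubectl --context invocation is assumed in place of the minikube kubectl wrapper the test uses:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7b57f96db7-6b5lc", "busybox-7b57f96db7-b7l6x"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			// Assumes kubectl is on PATH and a context named after the profile exists.
			out, err := exec.Command("kubectl", "--context", "multinode-897553",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				log.Fatalf("%s could not resolve %s: %v\n%s", pod, name, err, out)
			}
			fmt.Printf("%s resolved %s\n", pod, name)
		}
	}
}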

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (1.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-897553 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-897553 -- exec busybox-7b57f96db7-6b5lc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-897553 -- exec busybox-7b57f96db7-6b5lc -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-897553 -- exec busybox-7b57f96db7-b7l6x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-897553 -- exec busybox-7b57f96db7-b7l6x -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.12s)
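The ping test first extracts the host gateway address inside each pod with nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, then pings it. A Go sketch of just the parsing step, with illustrative nslookup output rather than output captured from the run; strings.Fields is a looser stand-in for cut -d' ' -f3, since it collapses runs of spaces:

package main

import (
	"fmt"
	"strings"
)

// hostIP mirrors the awk 'NR==5' | cut -d' ' -f3 pipeline: keep line 5 of the
// nslookup output and return its third field.
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Fields(lines[4]) // NR==5
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // third field, e.g. the gateway address to ping
}

func main() {
	// Illustrative busybox-style nslookup output, not taken from this run.
	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10\n\nName:      host.minikube.internal\nAddress 1: 192.168.58.1\n"
	fmt.Println("host IP to ping:", hostIP(sample))
}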

                                                
                                    
x
+
TestMultiNode/serial/AddNode (58.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-897553 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-897553 -v=5 --alsologtostderr: (57.707861973s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.40s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-897553 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.85s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 cp testdata/cp-test.txt multinode-897553:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 ssh -n multinode-897553 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 cp multinode-897553:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3918163797/001/cp-test_multinode-897553.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 ssh -n multinode-897553 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 cp multinode-897553:/home/docker/cp-test.txt multinode-897553-m02:/home/docker/cp-test_multinode-897553_multinode-897553-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 ssh -n multinode-897553 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 ssh -n multinode-897553-m02 "sudo cat /home/docker/cp-test_multinode-897553_multinode-897553-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 cp multinode-897553:/home/docker/cp-test.txt multinode-897553-m03:/home/docker/cp-test_multinode-897553_multinode-897553-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 ssh -n multinode-897553 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 ssh -n multinode-897553-m03 "sudo cat /home/docker/cp-test_multinode-897553_multinode-897553-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 cp testdata/cp-test.txt multinode-897553-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 ssh -n multinode-897553-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 cp multinode-897553-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3918163797/001/cp-test_multinode-897553-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 ssh -n multinode-897553-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 cp multinode-897553-m02:/home/docker/cp-test.txt multinode-897553:/home/docker/cp-test_multinode-897553-m02_multinode-897553.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 ssh -n multinode-897553-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 ssh -n multinode-897553 "sudo cat /home/docker/cp-test_multinode-897553-m02_multinode-897553.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 cp multinode-897553-m02:/home/docker/cp-test.txt multinode-897553-m03:/home/docker/cp-test_multinode-897553-m02_multinode-897553-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 ssh -n multinode-897553-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 ssh -n multinode-897553-m03 "sudo cat /home/docker/cp-test_multinode-897553-m02_multinode-897553-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 cp testdata/cp-test.txt multinode-897553-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 ssh -n multinode-897553-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 cp multinode-897553-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3918163797/001/cp-test_multinode-897553-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 ssh -n multinode-897553-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 cp multinode-897553-m03:/home/docker/cp-test.txt multinode-897553:/home/docker/cp-test_multinode-897553-m03_multinode-897553.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 ssh -n multinode-897553-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 ssh -n multinode-897553 "sudo cat /home/docker/cp-test_multinode-897553-m03_multinode-897553.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 cp multinode-897553-m03:/home/docker/cp-test.txt multinode-897553-m02:/home/docker/cp-test_multinode-897553-m03_multinode-897553-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 ssh -n multinode-897553-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 ssh -n multinode-897553-m02 "sudo cat /home/docker/cp-test_multinode-897553-m03_multinode-897553-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.37s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-897553 node stop m03: (1.312604295s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-897553 status: exit status 7 (528.309528ms)
-- stdout --
	multinode-897553
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-897553-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-897553-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-897553 status --alsologtostderr: exit status 7 (563.158231ms)
-- stdout --
	multinode-897553
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-897553-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-897553-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1017 19:45:57.620985  379407 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:45:57.621305  379407 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:45:57.621325  379407 out.go:374] Setting ErrFile to fd 2...
	I1017 19:45:57.621330  379407 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:45:57.621671  379407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:45:57.621904  379407 out.go:368] Setting JSON to false
	I1017 19:45:57.621934  379407 mustload.go:65] Loading cluster: multinode-897553
	I1017 19:45:57.622598  379407 config.go:182] Loaded profile config "multinode-897553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:45:57.622620  379407 status.go:174] checking status of multinode-897553 ...
	I1017 19:45:57.623334  379407 cli_runner.go:164] Run: docker container inspect multinode-897553 --format={{.State.Status}}
	I1017 19:45:57.623578  379407 notify.go:220] Checking for updates...
	I1017 19:45:57.649358  379407 status.go:371] multinode-897553 host status = "Running" (err=<nil>)
	I1017 19:45:57.649380  379407 host.go:66] Checking if "multinode-897553" exists ...
	I1017 19:45:57.649695  379407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-897553
	I1017 19:45:57.676661  379407 host.go:66] Checking if "multinode-897553" exists ...
	I1017 19:45:57.677044  379407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:45:57.677123  379407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-897553
	I1017 19:45:57.695479  379407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33264 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/multinode-897553/id_rsa Username:docker}
	I1017 19:45:57.802085  379407 ssh_runner.go:195] Run: systemctl --version
	I1017 19:45:57.808743  379407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:45:57.821776  379407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:45:57.880286  379407 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-17 19:45:57.870633771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:45:57.880935  379407 kubeconfig.go:125] found "multinode-897553" server: "https://192.168.58.2:8443"
	I1017 19:45:57.880970  379407 api_server.go:166] Checking apiserver status ...
	I1017 19:45:57.881025  379407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:45:57.893211  379407 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1230/cgroup
	I1017 19:45:57.901794  379407 api_server.go:182] apiserver freezer: "2:freezer:/docker/bb05f6a170505a76d68f35368a239da5c680572c312cf7b83f4bd38f1eaf0a60/crio/crio-a2d1f095742bd792eaf87a913e653fb16fe3842a7e8d30c5b6d1ad0ed769215f"
	I1017 19:45:57.901880  379407 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bb05f6a170505a76d68f35368a239da5c680572c312cf7b83f4bd38f1eaf0a60/crio/crio-a2d1f095742bd792eaf87a913e653fb16fe3842a7e8d30c5b6d1ad0ed769215f/freezer.state
	I1017 19:45:57.909485  379407 api_server.go:204] freezer state: "THAWED"
	I1017 19:45:57.909515  379407 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1017 19:45:57.917878  379407 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1017 19:45:57.917907  379407 status.go:463] multinode-897553 apiserver status = Running (err=<nil>)
	I1017 19:45:57.917941  379407 status.go:176] multinode-897553 status: &{Name:multinode-897553 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:45:57.917965  379407 status.go:174] checking status of multinode-897553-m02 ...
	I1017 19:45:57.918304  379407 cli_runner.go:164] Run: docker container inspect multinode-897553-m02 --format={{.State.Status}}
	I1017 19:45:57.934754  379407 status.go:371] multinode-897553-m02 host status = "Running" (err=<nil>)
	I1017 19:45:57.934780  379407 host.go:66] Checking if "multinode-897553-m02" exists ...
	I1017 19:45:57.935078  379407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-897553-m02
	I1017 19:45:57.951670  379407 host.go:66] Checking if "multinode-897553-m02" exists ...
	I1017 19:45:57.951978  379407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:45:57.952025  379407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-897553-m02
	I1017 19:45:57.969309  379407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33269 SSHKeyPath:/home/jenkins/minikube-integration/21753-257739/.minikube/machines/multinode-897553-m02/id_rsa Username:docker}
	I1017 19:45:58.073164  379407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:45:58.098430  379407 status.go:176] multinode-897553-m02 status: &{Name:multinode-897553-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:45:58.098510  379407 status.go:174] checking status of multinode-897553-m03 ...
	I1017 19:45:58.098881  379407 cli_runner.go:164] Run: docker container inspect multinode-897553-m03 --format={{.State.Status}}
	I1017 19:45:58.120972  379407 status.go:371] multinode-897553-m03 host status = "Stopped" (err=<nil>)
	I1017 19:45:58.120998  379407 status.go:384] host is not running, skipping remaining checks
	I1017 19:45:58.121005  379407 status.go:176] multinode-897553-m03 status: &{Name:multinode-897553-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-897553 node start m03 -v=5 --alsologtostderr: (7.106609993s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.89s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (73.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-897553
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-897553
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-897553: (25.062060919s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-897553 --wait=true -v=5 --alsologtostderr
E1017 19:46:48.231771  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-897553 --wait=true -v=5 --alsologtostderr: (48.723088729s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-897553
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.91s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-897553 node delete m03: (4.951997138s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-897553 stop: (23.898497344s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-897553 status: exit status 7 (86.226354ms)

                                                
                                                
-- stdout --
	multinode-897553
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-897553-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-897553 status --alsologtostderr: exit status 7 (104.308408ms)

                                                
                                                
-- stdout --
	multinode-897553
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-897553-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:47:49.628748  387227 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:47:49.628860  387227 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:47:49.628869  387227 out.go:374] Setting ErrFile to fd 2...
	I1017 19:47:49.628873  387227 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:47:49.629125  387227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 19:47:49.629305  387227 out.go:368] Setting JSON to false
	I1017 19:47:49.629338  387227 mustload.go:65] Loading cluster: multinode-897553
	I1017 19:47:49.629696  387227 config.go:182] Loaded profile config "multinode-897553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:47:49.629705  387227 status.go:174] checking status of multinode-897553 ...
	I1017 19:47:49.630209  387227 cli_runner.go:164] Run: docker container inspect multinode-897553 --format={{.State.Status}}
	I1017 19:47:49.632749  387227 notify.go:220] Checking for updates...
	I1017 19:47:49.653378  387227 status.go:371] multinode-897553 host status = "Stopped" (err=<nil>)
	I1017 19:47:49.653409  387227 status.go:384] host is not running, skipping remaining checks
	I1017 19:47:49.653416  387227 status.go:176] multinode-897553 status: &{Name:multinode-897553 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:47:49.653442  387227 status.go:174] checking status of multinode-897553-m02 ...
	I1017 19:47:49.653757  387227 cli_runner.go:164] Run: docker container inspect multinode-897553-m02 --format={{.State.Status}}
	I1017 19:47:49.681936  387227 status.go:371] multinode-897553-m02 host status = "Stopped" (err=<nil>)
	I1017 19:47:49.681962  387227 status.go:384] host is not running, skipping remaining checks
	I1017 19:47:49.681969  387227 status.go:176] multinode-897553-m02 status: &{Name:multinode-897553-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.09s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (48.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-897553 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-897553 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (47.357601981s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-897553 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.05s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-897553
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-897553-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-897553-m02 --driver=docker  --container-runtime=crio: exit status 14 (91.835082ms)

                                                
                                                
-- stdout --
	* [multinode-897553-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-897553-m02' is duplicated with machine name 'multinode-897553-m02' in profile 'multinode-897553'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-897553-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-897553-m03 --driver=docker  --container-runtime=crio: (40.483170791s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-897553
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-897553: exit status 80 (366.883398ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-897553 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-897553-m03 already exists in multinode-897553-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-897553-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-897553-m03: (2.061391403s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.06s)

                                                
                                    
TestPreload (135.71s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-697207 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1017 19:49:36.125494  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-697207 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m4.480484193s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-697207 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-697207 image pull gcr.io/k8s-minikube/busybox: (1.966120534s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-697207
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-697207: (5.903921504s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-697207 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-697207 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m0.668932349s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-697207 image list
helpers_test.go:175: Cleaning up "test-preload-697207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-697207
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-697207: (2.463644483s)
--- PASS: TestPreload (135.71s)

                                                
                                    
TestScheduledStopUnix (113.46s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-474750 --memory=3072 --driver=docker  --container-runtime=crio
E1017 19:51:48.231491  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-474750 --memory=3072 --driver=docker  --container-runtime=crio: (37.324478956s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-474750 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-474750 -n scheduled-stop-474750
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-474750 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1017 19:52:18.662738  259596 retry.go:31] will retry after 114.932µs: open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/scheduled-stop-474750/pid: no such file or directory
I1017 19:52:18.663114  259596 retry.go:31] will retry after 130.316µs: open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/scheduled-stop-474750/pid: no such file or directory
I1017 19:52:18.664241  259596 retry.go:31] will retry after 177.477µs: open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/scheduled-stop-474750/pid: no such file or directory
I1017 19:52:18.665363  259596 retry.go:31] will retry after 466.79µs: open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/scheduled-stop-474750/pid: no such file or directory
I1017 19:52:18.666488  259596 retry.go:31] will retry after 438.658µs: open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/scheduled-stop-474750/pid: no such file or directory
I1017 19:52:18.667607  259596 retry.go:31] will retry after 687.555µs: open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/scheduled-stop-474750/pid: no such file or directory
I1017 19:52:18.668714  259596 retry.go:31] will retry after 823.061µs: open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/scheduled-stop-474750/pid: no such file or directory
I1017 19:52:18.669768  259596 retry.go:31] will retry after 2.159433ms: open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/scheduled-stop-474750/pid: no such file or directory
I1017 19:52:18.672908  259596 retry.go:31] will retry after 3.686155ms: open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/scheduled-stop-474750/pid: no such file or directory
I1017 19:52:18.678115  259596 retry.go:31] will retry after 4.897237ms: open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/scheduled-stop-474750/pid: no such file or directory
I1017 19:52:18.683332  259596 retry.go:31] will retry after 7.659192ms: open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/scheduled-stop-474750/pid: no such file or directory
I1017 19:52:18.691481  259596 retry.go:31] will retry after 9.408222ms: open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/scheduled-stop-474750/pid: no such file or directory
I1017 19:52:18.701692  259596 retry.go:31] will retry after 6.916906ms: open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/scheduled-stop-474750/pid: no such file or directory
I1017 19:52:18.708922  259596 retry.go:31] will retry after 11.958109ms: open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/scheduled-stop-474750/pid: no such file or directory
I1017 19:52:18.721123  259596 retry.go:31] will retry after 26.927132ms: open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/scheduled-stop-474750/pid: no such file or directory
I1017 19:52:18.748366  259596 retry.go:31] will retry after 39.258187ms: open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/scheduled-stop-474750/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-474750 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-474750 -n scheduled-stop-474750
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-474750
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-474750 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-474750
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-474750: exit status 7 (71.189124ms)

                                                
                                                
-- stdout --
	scheduled-stop-474750
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-474750 -n scheduled-stop-474750
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-474750 -n scheduled-stop-474750: exit status 7 (66.392016ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-474750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-474750
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-474750: (4.542378574s)
--- PASS: TestScheduledStopUnix (113.46s)

                                                
                                    
TestInsufficientStorage (11.78s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-208159 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-208159 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.164015631s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a5eefb80-ce15-46af-86ad-ef97c069833f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-208159] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0d40ca04-f578-4c14-9eea-cbf3a05e1521","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21753"}}
	{"specversion":"1.0","id":"b6e858d8-889e-4f01-a13c-007fc82814b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"14a11d34-57d7-4210-8b2f-9ff04376210f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig"}}
	{"specversion":"1.0","id":"378d8ecc-3386-41c8-bb99-208a5da9af64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube"}}
	{"specversion":"1.0","id":"2207dfad-ca2f-4d70-97b0-7c44cacf6243","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8c6d033b-dd8b-4787-b43c-0a6fdea22ebf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"066b128a-39d3-4516-a99e-69e408ebdad6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"fc1ff0f0-778f-4c5f-9ff7-a878c6ce970e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4fa967ea-87ab-4c8a-a099-62c4336b8935","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b4525d71-6118-4f2f-886b-761fb290fde2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4295819c-37f5-4db6-abd4-082de70f4b05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-208159\" primary control-plane node in \"insufficient-storage-208159\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"903b4524-75e3-4c44-b430-cd9a86f46c82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760609789-21757 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c33e6486-fda8-4417-b5f6-dc770eea70b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c85d2e9d-c198-4e54-8921-f7abdd3f2bf0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-208159 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-208159 --output=json --layout=cluster: exit status 7 (314.136243ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-208159","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-208159","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1017 19:53:43.734727  403513 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-208159" does not appear in /home/jenkins/minikube-integration/21753-257739/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-208159 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-208159 --output=json --layout=cluster: exit status 7 (304.969872ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-208159","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-208159","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1017 19:53:44.040783  403579 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-208159" does not appear in /home/jenkins/minikube-integration/21753-257739/kubeconfig
	E1017 19:53:44.051831  403579 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/insufficient-storage-208159/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-208159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-208159
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-208159: (1.990296558s)
--- PASS: TestInsufficientStorage (11.78s)

                                                
                                    
TestRunningBinaryUpgrade (53.4s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
E1017 19:56:48.230978  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3401140044 start -p running-upgrade-866281 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3401140044 start -p running-upgrade-866281 --memory=3072 --vm-driver=docker  --container-runtime=crio: (31.758285524s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-866281 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-866281 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.91996442s)
helpers_test.go:175: Cleaning up "running-upgrade-866281" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-866281
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-866281: (1.983402085s)
--- PASS: TestRunningBinaryUpgrade (53.40s)

                                                
                                    
TestKubernetesUpgrade (361.26s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-819667 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-819667 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.565925388s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-819667
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-819667: (2.209809598s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-819667 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-819667 status --format={{.Host}}: exit status 7 (220.164011ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-819667 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-819667 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m36.179119172s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-819667 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-819667 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-819667 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (101.555287ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-819667] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-819667
	    minikube start -p kubernetes-upgrade-819667 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8196672 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-819667 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-819667 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-819667 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.369311802s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-819667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-819667
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-819667: (2.465594325s)
--- PASS: TestKubernetesUpgrade (361.26s)

                                                
                                    
TestMissingContainerUpgrade (118.42s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3082742622 start -p missing-upgrade-672083 --memory=3072 --driver=docker  --container-runtime=crio
E1017 19:54:19.197447  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3082742622 start -p missing-upgrade-672083 --memory=3072 --driver=docker  --container-runtime=crio: (1m5.987161463s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-672083
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-672083
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-672083 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-672083 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (48.489971534s)
helpers_test.go:175: Cleaning up "missing-upgrade-672083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-672083
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-672083: (2.277931107s)
--- PASS: TestMissingContainerUpgrade (118.42s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-731142 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-731142 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (103.48959ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-731142] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (47.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-731142 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-731142 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (47.150328837s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-731142 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.68s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-731142 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1017 19:54:36.125415  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-731142 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.502802882s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-731142 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-731142 status -o json: exit status 2 (527.491326ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-731142","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-731142
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-731142: (2.564809878s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.60s)

                                                
                                    
TestNoKubernetes/serial/Start (10.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-731142 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1017 19:54:51.298036  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-731142 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (10.930345774s)
--- PASS: TestNoKubernetes/serial/Start (10.93s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-731142 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-731142 "sudo systemctl is-active --quiet service kubelet": exit status 1 (352.83895ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.78s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-731142
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-731142: (1.285436463s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-731142 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-731142 --driver=docker  --container-runtime=crio: (7.093203137s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.09s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-731142 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-731142 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.589686ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.73s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.73s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (59.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.829680669 start -p stopped-upgrade-771448 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.829680669 start -p stopped-upgrade-771448 --memory=3072 --vm-driver=docker  --container-runtime=crio: (39.441007438s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.829680669 -p stopped-upgrade-771448 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.829680669 -p stopped-upgrade-771448 stop: (1.242615958s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-771448 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-771448 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.803521958s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (59.49s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.45s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-771448
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-771448: (1.450189106s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.45s)

TestPause/serial/Start (79.19s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-217784 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-217784 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m19.185686003s)
--- PASS: TestPause/serial/Start (79.19s)

TestPause/serial/SecondStartNoReconfiguration (101.07s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-217784 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1017 19:59:36.125795  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-217784 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m41.048845095s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (101.07s)

TestNetworkPlugins/group/false (4.9s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-804622 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-804622 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (255.511875ms)

-- stdout --
	* [false-804622] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	

-- /stdout --
** stderr ** 
	I1017 20:01:10.830950  441578 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:01:10.831182  441578 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:01:10.831210  441578 out.go:374] Setting ErrFile to fd 2...
	I1017 20:01:10.831232  441578 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:01:10.831527  441578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-257739/.minikube/bin
	I1017 20:01:10.831999  441578 out.go:368] Setting JSON to false
	I1017 20:01:10.833000  441578 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9822,"bootTime":1760721449,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1017 20:01:10.833085  441578 start.go:141] virtualization:  
	I1017 20:01:10.840566  441578 out.go:179] * [false-804622] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:01:10.843815  441578 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 20:01:10.843878  441578 notify.go:220] Checking for updates...
	I1017 20:01:10.849985  441578 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:01:10.852958  441578 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-257739/kubeconfig
	I1017 20:01:10.855844  441578 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-257739/.minikube
	I1017 20:01:10.859352  441578 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:01:10.862548  441578 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:01:10.865775  441578 config.go:182] Loaded profile config "force-systemd-flag-285387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:01:10.865906  441578 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:01:10.889229  441578 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:01:10.889342  441578 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:01:10.993111  441578 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-17 20:01:10.983037558 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:01:10.993208  441578 docker.go:318] overlay module found
	I1017 20:01:10.996271  441578 out.go:179] * Using the docker driver based on user configuration
	I1017 20:01:10.999007  441578 start.go:305] selected driver: docker
	I1017 20:01:10.999024  441578 start.go:925] validating driver "docker" against <nil>
	I1017 20:01:10.999051  441578 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:01:11.002631  441578 out.go:203] 
	W1017 20:01:11.005754  441578 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1017 20:01:11.008632  441578 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-804622 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-804622

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-804622

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-804622

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-804622

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-804622

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-804622

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-804622

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-804622

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-804622

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-804622

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-804622

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-804622" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-804622" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-804622

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-804622"

                                                
                                                
----------------------- debugLogs end: false-804622 [took: 4.41956028s] --------------------------------
helpers_test.go:175: Cleaning up "false-804622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-804622
--- PASS: TestNetworkPlugins/group/false (4.90s)

TestStartStop/group/old-k8s-version/serial/FirstStart (62.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-135652 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-135652 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m2.194859471s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.20s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-135652 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [38081228-78de-468b-b2de-1ee71ee84cac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [38081228-78de-468b-b2de-1ee71ee84cac] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003650909s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-135652 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.50s)

TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-135652 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-135652 --alsologtostderr -v=3: (12.008932348s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-135652 -n old-k8s-version-135652
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-135652 -n old-k8s-version-135652: exit status 7 (77.59208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-135652 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (48.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-135652 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1017 20:04:36.125988  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/addons-379549/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-135652 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (47.882836878s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-135652 -n old-k8s-version-135652
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.29s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xwfgw" [cc2416cb-c5d5-48c6-870f-828b378c0b23] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003099765s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xwfgw" [cc2416cb-c5d5-48c6-870f-828b378c0b23] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003375519s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-135652 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-135652 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/FirstStart (80.4s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-413711 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-413711 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m20.40325462s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (80.40s)

TestStartStop/group/embed-certs/serial/FirstStart (86.35s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m26.345273402s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.35s)

TestStartStop/group/no-preload/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-413711 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e8776954-7870-4b04-a178-bc73c09ccec1] Pending
helpers_test.go:352: "busybox" [e8776954-7870-4b04-a178-bc73c09ccec1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e8776954-7870-4b04-a178-bc73c09ccec1] Running
E1017 20:06:48.230938  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003888875s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-413711 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.33s)

TestStartStop/group/no-preload/serial/Stop (12.09s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-413711 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-413711 --alsologtostderr -v=3: (12.089206914s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.09s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-413711 -n no-preload-413711
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-413711 -n no-preload-413711: exit status 7 (68.545893ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-413711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (27.7s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-413711 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-413711 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (27.289713524s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-413711 -n no-preload-413711
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (27.70s)

TestStartStop/group/embed-certs/serial/DeployApp (10.41s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-572724 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5f8cf53e-8a62-4677-8c9e-ec9aee8c1cbd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5f8cf53e-8a62-4677-8c9e-ec9aee8c1cbd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00454396s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-572724 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.41s)

TestStartStop/group/embed-certs/serial/Stop (12.32s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-572724 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-572724 --alsologtostderr -v=3: (12.324209502s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.32s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-s7s2d" [4c45f2f1-d92a-465c-84fd-c82ef9c49fda] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003311505s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-572724 -n embed-certs-572724
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-572724 -n embed-certs-572724: exit status 7 (72.236393ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-572724 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (52.55s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-572724 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (52.102024184s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-572724 -n embed-certs-572724
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.55s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-s7s2d" [4c45f2f1-d92a-465c-84fd-c82ef9c49fda] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004152029s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-413711 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-413711 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-740780 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-740780 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m29.426607802s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.43s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2gxq5" [5070c655-fe42-4815-a448-d8d4f574d03a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003807087s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2gxq5" [5070c655-fe42-4815-a448-d8d4f574d03a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003072657s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-572724 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-572724 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/FirstStart (37.32s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-718789 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1017 20:08:50.348298  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:08:52.911055  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:08:58.032506  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:09:08.273831  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-718789 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (37.320033575s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.32s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-740780 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a22cdfdc-f249-4c36-b136-1e956a4ac0f0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a22cdfdc-f249-4c36-b136-1e956a4ac0f0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004780747s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-740780 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.46s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-718789 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-718789 --alsologtostderr -v=3: (1.346047463s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.35s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-718789 -n newest-cni-718789
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-718789 -n newest-cni-718789: exit status 7 (84.072824ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-718789 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)
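
The "exit status 7 (may be ok)" line above reflects how EnableAddonAfterStop tolerates a non-zero exit code from "minikube status" when the host is stopped, then enables the dashboard addon anyway. A rough Go sketch of that tolerant check, assuming the binary path and profile name from the log; the exact exit-code semantics belong to minikube and are not defined here.

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	bin := "out/minikube-linux-arm64"
	profile := "newest-cni-718789"

	cmd := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("host state:", host)
	case errors.As(err, &exitErr):
		// A stopped host reports a non-zero exit code (7 in the run above) but
		// still prints the state; the test logs it as "may be ok" and moves on.
		fmt.Printf("host state: %s (exit code %d, tolerated)\n", host, exitErr.ExitCode())
	default:
		log.Fatalf("could not run minikube status: %v", err)
	}

	// With the state known, the addon is re-enabled while the host is stopped.
	enable := exec.Command(bin, "addons", "enable", "dashboard", "-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if out, err := enable.CombinedOutput(); err != nil {
		log.Fatalf("addons enable failed: %v\n%s", err, out)
	}
}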

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (16.58s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-718789 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-718789 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (16.163992109s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-718789 -n newest-cni-718789
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.58s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-740780 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-740780 --alsologtostderr -v=3: (12.412122314s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.41s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-718789 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-740780 -n default-k8s-diff-port-740780
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-740780 -n default-k8s-diff-port-740780: exit status 7 (77.050884ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-740780 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-740780 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-740780 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.910178987s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-740780 -n default-k8s-diff-port-740780
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (86.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-804622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1017 20:10:09.717380  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-804622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m26.375393922s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rm6kw" [957b8ab9-0704-4c13-a3ab-a17691e5e2c1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003118415s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rm6kw" [957b8ab9-0704-4c13-a3ab-a17691e5e2c1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003706106s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-740780 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-740780 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)
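
VerifyKubernetesImages lists the images present in the cluster and reports anything outside the expected minikube/Kubernetes set, which is why kindnetd and the busybox test image show up above. The sketch below illustrates that allowlist-style check; it assumes "minikube image list" in its default format prints one image reference per line (the test itself uses --format=json), and the allowlist prefixes are illustrative, not the test's actual list.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	bin := "out/minikube-linux-arm64"
	profile := "default-k8s-diff-port-740780"

	// Illustrative prefixes for images minikube is expected to ship; anything
	// else gets reported, mirroring the "Found non-minikube image" lines above.
	expected := []string{
		"registry.k8s.io/",
		"gcr.io/k8s-minikube/storage-provisioner",
		"docker.io/kubernetesui/",
	}

	out, err := exec.Command(bin, "-p", profile, "image", "list").Output()
	if err != nil {
		log.Fatalf("image list failed: %v", err)
	}

	for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		known := false
		for _, p := range expected {
			if strings.HasPrefix(img, p) {
				known = true
				break
			}
		}
		if !known {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}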

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (81.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-804622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-804622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m21.041836874s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (81.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-804622 "pgrep -a kubelet"
I1017 20:11:23.549260  259596 config.go:182] Loaded profile config "auto-804622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-804622 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mmg4t" [899c6325-fcc0-45a2-9c36-f8fe7a11b749] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mmg4t" [899c6325-fcc0-45a2-9c36-f8fe7a11b749] Running
E1017 20:11:31.302298  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/functional-998954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:11:31.639195  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004072684s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-804622 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-804622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-804622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
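
The DNS/Localhost/HairPin trio above all exec a one-shot command inside the netcat deployment: nslookup for cluster DNS, then "nc -z" against localhost:8080 and against the pod's own "netcat" service name, the hairpin case where traffic goes out via the service and loops back to the same pod. A compact Go sketch of those three probes, assuming kubectl is on PATH and reusing the context from the log; it is not the net_test.go implementation.

package main

import (
	"fmt"
	"os/exec"
)

// probe execs a shell command inside the netcat deployment and prints the result.
func probe(ctx, name, shellCmd string) {
	out, err := exec.Command("kubectl", "--context", ctx,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
	status := "ok"
	if err != nil {
		status = "FAILED: " + err.Error()
	}
	fmt.Printf("%-9s %s\n%s\n", name, status, out)
}

func main() {
	ctx := "auto-804622" // context/profile from the log above

	// Cluster DNS: the in-pod resolver must answer for the kubernetes service.
	probe(ctx, "DNS", "nslookup kubernetes.default")

	// Localhost: the pod can reach its own container port directly.
	probe(ctx, "Localhost", "nc -w 5 -i 5 -z localhost 8080")

	// HairPin: the pod reaches itself through its own service name.
	probe(ctx, "HairPin", "nc -w 5 -i 5 -z netcat 8080")
}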

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (65.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-804622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1017 20:12:00.976680  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:12:21.458310  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-804622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m5.220563686s)
--- PASS: TestNetworkPlugins/group/calico/Start (65.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-w4c79" [ef15151d-edde-46f1-b88a-2216f69a587b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004186762s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-804622 "pgrep -a kubelet"
I1017 20:12:30.703240  259596 config.go:182] Loaded profile config "kindnet-804622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-804622 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dznqb" [4f925246-1d53-48c8-bf3c-4c0aad717b2a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dznqb" [4f925246-1d53-48c8-bf3c-4c0aad717b2a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003329581s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-804622 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-804622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-804622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-5hd6g" [f5b0056c-e451-4534-a707-13d6b1145ab1] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1017 20:13:02.420762  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "calico-node-5hd6g" [f5b0056c-e451-4534-a707-13d6b1145ab1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005009387s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-804622 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (69.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-804622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
I1017 20:13:08.267941  259596 config.go:182] Loaded profile config "calico-804622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-804622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m9.046396621s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (69.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-804622 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gsr5b" [b04c1ab8-761d-49df-8328-9a042d505ea7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gsr5b" [b04c1ab8-761d-49df-8328-9a042d505ea7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005820726s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-804622 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-804622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-804622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (78.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-804622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1017 20:13:47.778894  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:14:15.480659  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/old-k8s-version-135652/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-804622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m18.159207575s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-804622 "pgrep -a kubelet"
I1017 20:14:17.515445  259596 config.go:182] Loaded profile config "custom-flannel-804622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-804622 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qgss8" [bc189843-350a-4148-a6f0-a625fd5c9123] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qgss8" [bc189843-350a-4148-a6f0-a625fd5c9123] Running
E1017 20:14:24.343386  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/no-preload-413711/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:14:25.762024  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:14:25.768356  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:14:25.779627  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:14:25.800953  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:14:25.842502  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:14:25.923885  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:14:26.085351  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:14:26.406953  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:14:27.048297  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:14:28.330531  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004206217s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-804622 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-804622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-804622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (63.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-804622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-804622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m3.956394048s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-804622 "pgrep -a kubelet"
I1017 20:15:06.319447  259596 config.go:182] Loaded profile config "enable-default-cni-804622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-804622 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t6tmw" [8f5c4b99-5022-4a59-8311-4b376ac5cff9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1017 20:15:06.740600  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-t6tmw" [8f5c4b99-5022-4a59-8311-4b376ac5cff9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004474087s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-804622 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-804622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-804622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (74.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-804622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1017 20:15:47.702538  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/default-k8s-diff-port-740780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-804622 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m14.618610579s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-ddhd4" [cd6a26f5-0184-4676-94c5-9e6e0b8a653f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.028639794s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-804622 "pgrep -a kubelet"
I1017 20:16:00.637284  259596 config.go:182] Loaded profile config "flannel-804622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-804622 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m7x46" [f8c3acc7-8e43-4f53-bf39-718deda4d809] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-m7x46" [f8c3acc7-8e43-4f53-bf39-718deda4d809] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00459211s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-804622 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-804622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-804622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-804622 "pgrep -a kubelet"
I1017 20:16:56.919726  259596 config.go:182] Loaded profile config "bridge-804622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-804622 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4rjkr" [017a8cb3-065e-44b2-915b-487a96a347bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4rjkr" [017a8cb3-065e-44b2-915b-487a96a347bf] Running
E1017 20:17:04.877436  259596 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-257739/.minikube/profiles/auto-804622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003728067s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-804622 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-804622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-804622 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (31/327)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-786214 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-786214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-786214
--- SKIP: TestDownloadOnlyKic (0.44s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-672422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-672422
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.64s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-804622 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-804622

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-804622

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-804622

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-804622

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-804622

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-804622

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-804622

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-804622

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-804622

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-804622

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-804622

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-804622" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-804622" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-804622

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-804622"

                                                
                                                
----------------------- debugLogs end: kubenet-804622 [took: 4.453867853s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-804622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-804622
--- SKIP: TestNetworkPlugins/group/kubenet (4.64s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.49s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-804622 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-804622

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-804622

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-804622

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-804622

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-804622

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-804622

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-804622

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-804622

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-804622

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-804622

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-804622

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-804622" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-804622

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-804622

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-804622

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-804622

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-804622" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-804622" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-804622

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-804622" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-804622"

                                                
                                                
----------------------- debugLogs end: cilium-804622 [took: 5.249782084s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-804622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-804622
--- SKIP: TestNetworkPlugins/group/cilium (5.49s)

                                                
                                    